---
layout: docs
page_title: Product usage reporting
description: >-
Learn what anonymous usage data HashiCorp collects as part of Enterprise utilization reporting. Enable or disable collection.
---
# Product usage reporting
@include 'alerts/enterprise-only.mdx'
HashiCorp collects usage data about how Vault clusters are being used. This data is not
used for billing, is numerical only, and includes no sensitive information of
any nature. The data is GDPR compliant and is collected as part of
the [license utilization reporting](/vault/docs/enterprise/license/utilization-reporting)
process. If automated reporting is enabled, this data is collected automatically.
If automated reporting is disabled, the data is included in the manual reports instead.
## Opt out
None of the collected usage metrics are sensitive, but if you are still concerned
about these metrics being reported, you can opt out of collection.
If you are considering opting out because you’re worried about the data, we
strongly recommend that you review the [usage metrics list](#usage-metrics-list)
before opting out. If you have concerns with any of the automatically reported
data, please bring them to your account manager.
You have two options to opt out of product usage collection:
- HCL configuration (recommended)
- Environment variable (requires restart)
#### HCL configuration
Opting out in your product's configuration file doesn't require a system
restart, and is the method we recommend. Add the following block to your server
configuration file (e.g. `vault-config.hcl`).
```hcl
reporting {
disable_product_usage_reporting = true
}
```
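To apply the change to a running node without a restart, reload its
configuration. A minimal sketch, assuming a systemd-managed node whose unit is
named `vault` (confirm that your Vault version reloads the `reporting` stanza
on SIGHUP before relying on this):

```shell-session
$ sudo systemctl kill --signal=HUP vault
```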
<Warning>
When you run a cluster, every node must have the same reporting stanza in its
configuration so behavior stays consistent. During a leadership change, each
node uses its own server configuration to determine whether to opt out of
product usage collection. Inconsistent configuration between nodes will change
the reporting status upon active unseal.
</Warning>
You will find the following entry in the server log.
<CodeBlockConfig hideClipboard>
```
[DEBUG] activity: there is no reporting agent configured, skipping counts reporting
```
</CodeBlockConfig>
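To confirm the opt-out from the command line, search the server log for that
message. A sketch, assuming the node logs to the systemd journal; substitute
your own log location if Vault logs to a file:

```shell-session
$ journalctl -u vault | grep "skipping counts reporting"
```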
#### Environment variable
If you need to, you can also opt out using an environment variable. Vault will
log a startup message confirming that product usage data collection is
disabled. This option requires a system restart.
<Note>
If the reporting stanza exists in the configuration file, the
`OPTOUT_PRODUCT_USAGE_REPORTING` value overrides the configuration.
</Note>
Set the following environment variable.
```shell-session
$ export OPTOUT_PRODUCT_USAGE_REPORTING=true
```
Now, restart your [Vault servers](/vault/docs/commands/server) from the shell
where you set the environment variable.
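For example, if your configuration file is `vault-config.hcl` (the path is
illustrative):

```shell-session
$ vault server -config=vault-config.hcl
```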
You will find the following entry in the server log.
<CodeBlockConfig hideClipboard>
```
[DEBUG] core: product usage reporting disabled
```
</CodeBlockConfig>
If your configuration file and environment variable differ, the environment
variable setting will take precedence.
## Usage metrics list
HashiCorp collects the following product usage metrics in the `metrics` section of the
[JSON payload that it collects for license utilization](/vault/docs/enterprise/license/utilization-reporting#example-payloads).
All of these metrics are numerical, and contain no sensitive values or additional metadata:
| Metric Name | Description |
|------------------------------------------------------|------------------------------------------------------------------------------------|
| `vault.namespaces.count` | Total number of namespaces. |
| `vault.leases.count` | Total number of leases within Vault. |
| `vault.quotas.ratelimit.count` | Total number of rate limit quotas within Vault. |
| `vault.quotas.leasecount.count` | Total number of lease count quotas within Vault. |
| `vault.kv.version1.secrets.count` | Total number of KVv1 secrets within Vault. |
| `vault.kv.version2.secrets.count` | Total number of KVv2 secrets within Vault. |
| `vault.kv.version1.secrets.namespace.max` | The highest number of KVv1 secrets in a namespace in Vault, e.g. `1000`. |
| `vault.kv.version2.secrets.namespace.max` | The highest number of KVv2 secrets in a namespace in Vault, e.g. `1000`. |
| `vault.kv.version1.secrets.namespace.min` | The lowest number of KVv1 secrets in a namespace in Vault, e.g. `2`. |
| `vault.kv.version2.secrets.namespace.min` | The lowest number of KVv2 secrets in a namespace in Vault, e.g. `2`. |
| `vault.kv.version1.secrets.namespace.mean` | The mean number of KVv1 secrets in namespaces in Vault, e.g. `52.8`. |
| `vault.kv.version2.secrets.namespace.mean` | The mean number of KVv2 secrets in namespaces in Vault, e.g. `52.8`. |
| `vault.auth.method.approle.count` | The total number of AppRole auth mounts in Vault. |
| `vault.auth.method.alicloud.count` | The total number of Alicloud auth mounts in Vault. |
| `vault.auth.method.aws.count` | The total number of AWS auth mounts in Vault. |
| `vault.auth.method.appid.count` | The total number of App ID auth mounts in Vault. |
| `vault.auth.method.azure.count` | The total number of Azure auth mounts in Vault. |
| `vault.auth.method.cloudfoundry.count` | The total number of Cloud Foundry auth mounts in Vault. |
| `vault.auth.method.github.count` | The total number of GitHub auth mounts in Vault. |
| `vault.auth.method.gcp.count` | The total number of GCP auth mounts in Vault. |
| `vault.auth.method.jwt.count` | The total number of JWT auth mounts in Vault. |
| `vault.auth.method.kerberos.count` | The total number of Kerberos auth mounts in Vault. |
| `vault.auth.method.kubernetes.count` | The total number of Kubernetes auth mounts in Vault. |
| `vault.auth.method.ldap.count` | The total number of LDAP auth mounts in Vault. |
| `vault.auth.method.oci.count` | The total number of OCI auth mounts in Vault. |
| `vault.auth.method.okta.count` | The total number of Okta auth mounts in Vault. |
| `vault.auth.method.pcf.count` | The total number of PCF auth mounts in Vault. |
| `vault.auth.method.radius.count` | The total number of RADIUS auth mounts in Vault. |
| `vault.auth.method.saml.count` | The total number of SAML auth mounts in Vault. |
| `vault.auth.method.cert.count` | The total number of Cert auth mounts in Vault. |
| `vault.auth.method.oidc.count` | The total number of OIDC auth mounts in Vault. |
| `vault.auth.method.token.count` | The total number of Token auth mounts in Vault. |
| `vault.auth.method.userpass.count` | The total number of Userpass auth mounts in Vault. |
| `vault.auth.method.plugin.count` | The total number of custom plugin auth mounts in Vault. |
| `vault.secret.engine.activedirectory.count` | The total number of Active Directory secret engines in Vault. |
| `vault.secret.engine.alicloud.count` | The total number of Alicloud secret engines in Vault. |
| `vault.secret.engine.aws.count` | The total number of AWS secret engines in Vault. |
| `vault.secret.engine.azure.count` | The total number of Azure secret engines in Vault. |
| `vault.secret.engine.consul.count` | The total number of Consul secret engines in Vault. |
| `vault.secret.engine.gcp.count` | The total number of GCP secret engines in Vault. |
| `vault.secret.engine.gcpkms.count` | The total number of GCPKMS secret engines in Vault. |
| `vault.secret.engine.kubernetes.count` | The total number of Kubernetes secret engines in Vault. |
| `vault.secret.engine.cassandra.count` | The total number of Cassandra secret engines in Vault. |
| `vault.secret.engine.keymgmt.count` | The total number of Keymgmt secret engines in Vault. |
| `vault.secret.engine.kv.count` | The total number of KV secret engines in Vault. |
| `vault.secret.engine.kmip.count` | The total number of KMIP secret engines in Vault. |
| `vault.secret.engine.mongodb.count` | The total number of MongoDB secret engines in Vault. |
| `vault.secret.engine.mongodbatlas.count` | The total number of MongoDBAtlas secret engines in Vault. |
| `vault.secret.engine.mssql.count` | The total number of MSSql secret engines in Vault. |
| `vault.secret.engine.postgresql.count` | The total number of Postgresql secret engines in Vault. |
| `vault.secret.engine.nomad.count` | The total number of Nomad secret engines in Vault. |
| `vault.secret.engine.ldap.count` | The total number of LDAP secret engines in Vault. |
| `vault.secret.engine.openldap.count` | The total number of OpenLDAP secret engines in Vault. |
| `vault.secret.engine.pki.count` | The total number of PKI secret engines in Vault. |
| `vault.secret.engine.rabbitmq.count` | The total number of RabbitMQ secret engines in Vault. |
| `vault.secret.engine.ssh.count` | The total number of SSH secret engines in Vault. |
| `vault.secret.engine.terraform.count` | The total number of Terraform secret engines in Vault. |
| `vault.secret.engine.totp.count` | The total number of TOTP secret engines in Vault. |
| `vault.secret.engine.transform.count` | The total number of Transform secret engines in Vault. |
| `vault.secret.engine.transit.count` | The total number of Transit secret engines in Vault. |
| `vault.secret.engine.database.count` | The total number of Database secret engines in Vault. |
| `vault.secret.engine.plugin.count` | The total number of custom plugin secret engines in Vault. |
| `vault.secretsync.sources.count` | The total number of secret sources configured for secret sync. |
| `vault.secretsync.destinations.count` | The total number of secret destinations configured for secret sync. |
| `vault.secretsync.destinations.aws-sm.count` | The total number of AWS-SM secret destinations configured for secret sync. |
| `vault.secretsync.destinations.azure-kv.count` | The total number of Azure-KV secret destinations configured for secret sync. |
| `vault.secretsync.destinations.gh.count` | The total number of GH secret destinations configured for secret sync. |
| `vault.secretsync.destinations.vault.count` | The total number of Vault secret destinations configured for secret sync. |
| `vault.secretsync.destinations.vercel-project.count` | The total number of Vercel Project secret destinations configured for secret sync. |
| `vault.secretsync.destinations.terraform.count` | The total number of Terraform secret destinations configured for secret sync. |
| `vault.secretsync.destinations.gitlab.count` | The total number of GitLab secret destinations configured for secret sync. |
| `vault.secretsync.destinations.inmem.count` | The total number of InMem secret destinations configured for secret sync. |
| `vault.pki.roles.count` | The total roles in all PKI mounts across all namespaces. |
| `vault.pki.issuers.count` | The total issuers from all PKI mounts across all namespaces. |
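Each metric above appears as a key in the `metrics` map of the payload. As an
illustration, assuming you exported a bundle to `latest.json` with
`vault operator utilization` (see the manual license utilization reporting
documentation) and the bundle is a single snapshot object, you could inspect
one value with `jq`:

```shell-session
$ jq '.metrics["vault.namespaces.count"].value' latest.json
```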
## Usage metadata list
HashiCorp collects the following product usage metadata in the `metadata` section of the
[JSON payload that it collects for license utilization](/vault/docs/enterprise/license/utilization-reporting#example-payloads):
| Metadata Name | Description |
|----------------------|----------------------------------------------------------------------|
| `replication_status` | Replication status of this cluster, e.g. `perf-disabled,dr-disabled` |
---
layout: docs
page_title: Manual license utilization reporting
description: >-
Manual license utilization reporting allows you to export, review, and send license utilization data to HashiCorp through the CLI or HCP Web Portal.
---
# Manual license utilization reporting
@include 'alerts/enterprise-only.mdx'
Manual license utilization reporting allows you to export, review, and send
license utilization data to HashiCorp via the CLI or HCP Web Portal. Use these
reports to understand how much more you can deploy under your current contract,
protect against overutilization, and budget for predicted consumption. Manual
reporting shares the minimum data required to validate license utilization as
defined in our contracts. The reports consist of mostly computed metrics and
will never contain Personal Identifiable Information (PII) or other sensitive
information.
Manual license utilization shares the same data as automated license utilization
but is more time consuming. Unless you are running in an air-gapped environment
or have another reason to report data manually, we strongly recommend using
automated reporting instead. If you have disabled automated license reporting,
you can re-enable it by reversing the opt-out process described in the
[documentation](/vault/docs/enterprise/license/utilization-reporting#opt-out).
If you are considering manual reporting because you’re worried about your data,
we strongly recommend that you review the [example
payloads](#data-file-content), which are the same for automated and manual
reporting. If you have further concerns with any of the automatically-reported
data please bring them to your account manager before opting out of automated
reporting in favor of manual reporting.
## How to manually send data reports
### Generate a data bundle
Data bundles include collections of JSON snapshots that contain license
utilization information.
1. Log in to your [cluster node](/vault/tutorials/cloud/vault-access-cluster).
1. Run this CLI command to generate a data bundle:
```shell-session
$ vault operator utilization
```
By default, the bundle will include all historical snapshots.
You can provide context about the conditions under which the report was
generated and submitted by providing a comment. This optional comment will
not be included in the license utilization bundle, but will be included in
the Vault server logs.
**Example:**
```shell-session
$ vault operator utilization -message="Change Control 654987" \
    -output="/utilization/reports/latest.json"
```
This command will export all the persisted snapshots into a bundle. The
message "Change Control 654987" will not be included in the bundle but will
be included in the Vault server logs. The `-output` flag specifies the output
location of the JSON bundle.
**Available command flags:**
- `-message` `(string: "")` - Provide context about the conditions under
which the report was generated and submitted. This message is not included
in the license utilization bundle but will be included in the Vault server
logs. (optional)
- `-today-only` `(bool: false)` - Include only today's snapshot, with no
historical snapshots. If no snapshots were persisted in the last 24 hours,
the command takes a snapshot and exports it to a bundle. (optional)
- `-output` `(string: "")` - Specifies the output path for the bundle.
Defaults to a time-based generated file name. (optional)
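For example, to capture and export only the current day's snapshot (the output
path is illustrative):

```shell-session
$ vault operator utilization -today-only \
    -output="/utilization/reports/today.json"
```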
### Send the data bundle to HashiCorp
1. Go to https://portal.cloud.hashicorp.com/license-utilization/reports/create
1. Click on **Choose files**, or drop your file(s) into the container.
a. If the upload succeeded, the HCP user interface will change the file
status to **Uploaded** in green.
b. If the upload failed, the file status will say **Failed** in red, and
will include error information.
If the upload fails, make sure you haven’t modified the file signature. If the
error persists, please contact your account representative.
## Enable manual reporting
Upgrade to a release that supports manual license utilization reporting. These
releases include:
- Vault Enterprise 1.16.0 and later
- Vault Enterprise 1.15.6 and later
- Vault Enterprise 1.14.10 and later
## Configuration
Administrators can manage disk space for storing snapshots by defining how long
snapshots are retained.
```hcl
reporting {
snapshot_retention_time = "2400h"
}
```
The default retention period is 400 days; the `2400h` in the example above
corresponds to 100 days.
## Data file content
<CodeBlockConfig hideClipboard>
```json
{
"snapshot_version": 2,
"id": "0001JWAY00BRF8TEXC9CVRHBAC",
"timestamp": "2024-02-08T16:55:28.085215-08:00",
"schema_version": "2.0.0",
"product": "vault",
"process_id": "01HP5NJS21HN50FY0CBS0SYGCH",
"metrics": {
"clientcount.current_month_estimate.type.acme_client": {
"key": "clientcount.current_month_estimate.type.acme_client",
"value": 0,
"mode": "write"
},
"clientcount.current_month_estimate.type.entity": {
"key": "clientcount.current_month_estimate.type.entity",
"value": 20,
"mode": "write"
},
"clientcount.current_month_estimate.type.nonentity": {
"key": "clientcount.current_month_estimate.type.nonentity",
"value": 11,
"mode": "write"
},
"clientcount.current_month_estimate.type.secret_sync": {
"key": "clientcount.current_month_estimate.type.secret_sync",
"value": 0,
"mode": "write"
},
"clientcount.previous_month_complete.type.acme_client": {
"key": "clientcount.previous_month_complete.type.acme_client",
"value": 0,
"mode": "write"
},
"clientcount.previous_month_complete.type.entity": {
"key": "clientcount.previous_month_complete.type.entity",
"value": 0,
"mode": "write"
},
"clientcount.previous_month_complete.type.nonentity": {
"key": "clientcount.previous_month_complete.type.nonentity",
"value": 0,
"mode": "write"
},
"clientcount.previous_month_complete.type.secret_sync": {
"key": "clientcount.previous_month_complete.type.secret_sync",
"value": 0,
"mode": "write"
}
},
"product_version": "1.16.0+ent",
"license_id": "7d68b16a-74fe-3b9f-a1a7-08cf461fff1c",
"checksum": 6861637915450723051,
"metadata": {
"billing_start": "2023-05-04T00:00:00Z",
"cluster_id": "16d0ff5b-9d40-d7a7-384c-c9b95320c60e"
  }
}
```
</CodeBlockConfig>
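As a quick review step before uploading a bundle, you can summarize metrics
with `jq`. A sketch, assuming the bundle was exported to `latest.json` and is a
single snapshot object shaped like the example above:

```shell-session
$ jq '[.metrics[] | select(.key | startswith("clientcount")) | .value] | add' \
    latest.json
```

This sums the client-count metric values in the snapshot.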
---
layout: docs
page_title: TOTP MFA - MFA Support - Vault Enterprise
description: Vault Enterprise supports TOTP MFA type.
---
# TOTP MFA
@include 'alerts/enterprise-only.mdx'
This page demonstrates TOTP MFA on ACL'd paths of Vault.
## Configuration
1. Enable the appropriate auth method:
```text
$ vault auth enable userpass
```
1. Fetch the mount accessor for the enabled auth method:
```text
$ vault auth list -detailed
```
The response will look like:
```text
Path Type Accessor Plugin Default TTL Max TTL Replication Description
---- ---- -------- ------ ----------- ------- ----------- -----------
token/ token auth_token_289703e9 n/a system system replicated token based credentials
userpass/ userpass auth_userpass_54b8e339 n/a system system replicated n/a
```
1. Configure TOTP MFA:
-> **Note**: Consider the algorithms supported by your authenticator. For example, Google Authenticator for Android supports only SHA1 as the value of `algorithm`.
```text
$ vault write sys/mfa/method/totp/my_totp \
issuer=Vault \
period=30 \
key_size=30 \
algorithm=SHA256 \
digits=6
```
1. Create a policy that gives access to a secret through the MFA method created
above:
```text
$ vault policy write totp-policy -<<EOF
path "secret/foo" {
capabilities = ["read"]
mfa_methods = ["my_totp"]
}
EOF
```
1. Create a user. MFA works only for tokens that have identity information on
them. Tokens created by logging in using auth methods will have the associated
identity information. Create a user in the `userpass` auth method and
authenticate against it:
```text
$ vault write auth/userpass/users/testuser \
password=testpassword \
policies=totp-policy
```
1. Create a login token:
```text
$ vault write auth/userpass/login/testuser \
password=testpassword
Key Value
--- -----
token 70f97438-e174-c03c-40fe-6bcdc1028d6c
token_accessor a91d97f4-1c7d-6af3-e4bf-971f74f9fab9
token_duration 768h
token_renewable true
token_policies [default totp-policy]
token_meta_username "testuser"
```
Note that the CLI is not yet authenticated with the newly created token; we
did not call `vault login`, instead using the login API to simply return a
token.
1. Fetch the entity ID from the token. The caller identity is represented by the
`entity_id` property of the token:
```text
$ vault token lookup 70f97438-e174-c03c-40fe-6bcdc1028d6c
Key Value
--- -----
accessor a91d97f4-1c7d-6af3-e4bf-971f74f9fab9
creation_time 1502245243
creation_ttl 2764800
display_name userpass-testuser
entity_id 307d6c16-6f5c-4ae7-46a9-2d153ffcbc63
expire_time 2017-09-09T22:20:43.448543132-04:00
explicit_max_ttl 0
id 70f97438-e174-c03c-40fe-6bcdc1028d6c
issue_time 2017-08-08T22:20:43.448543003-04:00
meta map[username:testuser]
num_uses 0
orphan true
path auth/userpass/login/testuser
policies [default totp-policy]
renewable true
ttl 2764623
```
1. Generate a TOTP key attached to the entity. Distribute this to the intended
user so that they can generate TOTP passcodes:
```text
$ vault write sys/mfa/method/totp/my_totp/admin-generate \
entity_id=307d6c16-6f5c-4ae7-46a9-2d153ffcbc63
Key Value
--- -----
barcode iVBORw0KGgoAAAANSUhEUgAAAM...
url otpauth://totp/Vault:307d6c16-6f5c-4ae7-46a9-2d153ffcbc63?algo...
```
Give either the base64-encoded PNG barcode or the URL to the end user. The
barcode or URL can be loaded into Google Authenticator or a similar TOTP
tool to generate codes. To decode the barcode into an image file, see the
sketch after these steps.
1. Login as the user:
```text
$ vault login 70f97438-e174-c03c-40fe-6bcdc1028d6c
```
1. Read the secret, specifying the mfa flag:
```text
$ vault read -mfa my_totp:146378 secret/foo
Key Value
--- -----
refresh_interval 768h
data which can only be read after MFA validation
```
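As referenced in the generation step above, you can decode the `barcode` field
into an image file for distribution. A sketch, assuming the hypothetical output
file `barcode.png`; note that re-running `admin-generate` for an entity that
already has a key may fail, so capture the field when you first generate it:

```shell-session
$ vault write -field=barcode \
    sys/mfa/method/totp/my_totp/admin-generate \
    entity_id=307d6c16-6f5c-4ae7-46a9-2d153ffcbc63 | base64 -d > barcode.png
```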
---
layout: docs
page_title: MFA Support - Vault Enterprise
description: >-
Vault Enterprise has support for Multi-factor Authentication (MFA), using
different authentication types.
---
# Vault enterprise MFA support
@include 'alerts/enterprise-only.mdx'
Vault Enterprise has support for Multi-factor Authentication (MFA), using
different authentication types. MFA is built on top of the Identity system of
Vault.
## MFA types
MFA in Vault can be of the following types.
- **Time-based One-time Password (TOTP)** - If configured and enabled on a path,
this requires a TOTP passcode to be presented along with the Vault token
when invoking the API request. The passcode will be validated against the
TOTP key present in the identity of the caller in Vault.
- **Okta** - If Okta push is configured and enabled on a path, then the enrolled
device of the user will get a push notification to approve or deny access
to the API. The Okta username will be derived from the caller identity's
alias.
- **Duo** - If Duo push is configured and enabled on a path, then the enrolled
device of the user will get a push notification to approve or deny access
to the API. The Duo username will be derived from the caller identity's
alias.
- **PingID** - If PingID push is configured and enabled on a path, then the
enrolled device of the user will get a push notification to approve or deny
access to the API. The PingID username will be derived from the caller
identity's alias.
## Configuring MFA methods
MFA methods are globally managed within the `System Backend` using the HTTP API.
Please see [MFA API](/vault/api-docs/system/mfa) for details on how to configure an MFA
method.
## MFA methods in policies
MFA requirements on paths are specified as `mfa_methods` along with other ACL
parameters.
### Sample policy
```hcl
path "secret/foo" {
capabilities = ["read"]
mfa_methods = ["dev_team_duo", "sales_team_totp"]
}
```
The above policy grants `read` access to `secret/foo` only after _both_ the MFA
methods `dev_team_duo` and `sales_team_totp` are validated.
## Namespaces
All MFA configurations must be configured in the root namespace. They can be
referenced from ACL and Sentinel policies in any namespace via the method name
and can be tied to a mount accessor in any namespace.
When using [Sentinel
EGPs](/vault/docs/enterprise/sentinel#endpoint-governing-policies-egps),
any MFA configuration specified must be satisfied by all requests affected by
the policy, which can be difficult if the configured paths are spread across
namespaces. One way to address this is to use a policy similar to the
following, using `or` operators to allow MFA configurations tied to mount
accessors in the various namespaces:
```python
import "mfa"
has_mfa = rule {
mfa.methods.duons1.valid
}
has_mfa2 = rule {
mfa.methods.duons2.valid
}
main = rule {
has_mfa or has_mfa2
}
```
When using TOTP, any user with ACL permissions can self-generate credentials.
Admins can generate or destroy credentials only if the targeted entity is in
the same namespace.
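For example, a user whose token has access to the method could self-generate a
TOTP key for their own entity. A sketch based on the MFA API; see the
[MFA API](/vault/api-docs/system/mfa) for the exact endpoint and response:

```shell-session
$ vault write -f sys/mfa/method/totp/my_totp/generate
```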
## Supplying MFA credentials
MFA credentials are retrieved from the `X-Vault-MFA` HTTP header. The format of
the header is `mfa_method_name[:key[=value]]`. The items in the `[]` are
optional.
### Sample request
```shell-session
$ curl \
--header "X-Vault-Token: ..." \
--header "X-Vault-MFA:my_totp:695452" \
http://127.0.0.1:8200/v1/secret/foo
```
## API
MFA can be managed entirely over the HTTP API. Please see [MFA API](/vault/api-docs/system/mfa) for more details.
## Additional resources
- [Duo MFA documentation](/vault/docs/enterprise/mfa/mfa-duo)
- [Okta MFA documentation](/vault/docs/enterprise/mfa/mfa-okta)
- [PingID MFA documentation](/vault/docs/enterprise/mfa/mfa-pingid)
- [TOTP MFA documentation](/vault/docs/enterprise/mfa/mfa-totp)
---
layout: docs
page_title: Duo MFA - MFA Support - Vault Enterprise
description: Vault Enterprise supports Duo MFA type.
---
# Duo MFA
@include 'alerts/enterprise-only.mdx'
This page demonstrates Duo MFA on ACL'd paths of Vault.
## Configuration
1. Enable the appropriate auth method:
```text
$ vault auth enable userpass
```
1. Fetch the mount accessor for the enabled auth method:
```text
$ vault auth list -detailed
```
The response will look like:
```text
Path Type Accessor Plugin Default TTL Max TTL Replication Description
---- ---- -------- ------ ----------- ------- ----------- -----------
token/ token auth_token_289703e9 n/a system system replicated token based credentials
userpass/ userpass auth_userpass_54b8e339 n/a system system replicated n/a
```
1. Configure Duo MFA:
```text
$ vault write sys/mfa/method/duo/my_duo \
mount_accessor=auth_userpass_54b8e339 \
integration_key=BIACEUEAXI20BNWTEYXT \
secret_key=HIGTHtrIigh2rPZQMbguugt8IUftWhMRCOBzbuyz \
api_hostname=api-2b5c39f5.duosecurity.com
```
1. Create a policy that gives access to a secret through the MFA method created
above:
```text
$ vault policy write duo-policy -<<EOF
path "secret/foo" {
capabilities = ["read"]
mfa_methods = ["my_duo"]
}
EOF
```
1. Create a user. MFA works only for tokens that have identity information on
them. Tokens created by logging in using auth methods will have the associated
identity information. Create a user in the `userpass` auth method and
authenticate against it:
```text
$ vault write auth/userpass/users/testuser \
password=testpassword \
policies=duo-policy
```
1. Create a login token:
```text
$ vault write auth/userpass/login/testuser \
password=testpassword
Key Value
--- -----
token 70f97438-e174-c03c-40fe-6bcdc1028d6c
token_accessor a91d97f4-1c7d-6af3-e4bf-971f74f9fab9
token_duration 768h
token_renewable true
token_policies [default duo-policy]
token_meta_username "testuser"
```
Note that the CLI is not yet authenticated with the newly created token; we
did not call `vault login`, instead using the login API to simply return a
token.
1. Fetch the entity ID from the token. The caller identity is represented by the
`entity_id` property of the token:
```text
$ vault token lookup 70f97438-e174-c03c-40fe-6bcdc1028d6c
Key Value
--- -----
accessor a91d97f4-1c7d-6af3-e4bf-971f74f9fab9
creation_time 1502245243
creation_ttl 2764800
display_name userpass-testuser
entity_id 307d6c16-6f5c-4ae7-46a9-2d153ffcbc63
expire_time 2017-09-09T22:20:43.448543132-04:00
explicit_max_ttl 0
id 70f97438-e174-c03c-40fe-6bcdc1028d6c
issue_time 2017-08-08T22:20:43.448543003-04:00
meta map[username:testuser]
num_uses 0
orphan true
path auth/userpass/login/testuser
policies [default duo-policy]
renewable true
ttl 2764623
```
1. Login as the user:
```text
$ vault login 70f97438-e174-c03c-40fe-6bcdc1028d6c
```
1. Read a secret to trigger a Duo push. This will be a blocking call until
the push notification is either approved or declined:
```text
$ vault read secret/foo
Key Value
--- -----
refresh_interval 768h
data which can only be read after MFA validation
```
---
layout: docs
page_title: PingID MFA - MFA Support - Vault Enterprise
description: Vault Enterprise supports PingID MFA type.
---
# PingID MFA
@include 'alerts/enterprise-only.mdx'
This page demonstrates PingID MFA on ACL'd paths of Vault.
## Configuration
1. Enable the appropriate auth method:
```text
$ vault auth enable userpass
```
1. Fetch the mount accessor for the enabled auth method:
```text
$ vault auth list -detailed
```
The response will look like:
```text
Path Type Accessor Plugin Default TTL Max TTL Replication Description
---- ---- -------- ------ ----------- ------- ----------- -----------
token/ token auth_token_289703e9 n/a system system replicated token based credentials
userpass/ userpass auth_userpass_54b8e339 n/a system system replicated n/a
```
1. Configure PingID MFA:
```text
$ vault write sys/mfa/method/pingid/ping \
mount_accessor=auth_userpass_54b8e339 \
settings_file_base64="AABDwWaR..."
```
1. Create a policy that gives access to a secret through the MFA method created
above:
```text
$ vault policy write ping-policy -<<EOF
path "secret/foo" {
capabilities = ["read"]
mfa_methods = ["ping"]
}
EOF
```
1. Create a user. MFA works only for tokens that have identity information on
them. Tokens created by logging in using auth methods will have the associated
identity information. Create a user in the `userpass` auth method and
authenticate against it:
```text
$ vault write auth/userpass/users/testuser \
password=testpassword \
policies=ping-policy
```
1. Create a login token:
```text
$ vault write auth/userpass/login/testuser password=testpassword
Key Value
--- -----
token 70f97438-e174-c03c-40fe-6bcdc1028d6c
token_accessor a91d97f4-1c7d-6af3-e4bf-971f74f9fab9
token_duration 768h0m0s
token_renewable true
token_policies [default ping-policy]
token_meta_username "testuser"
```
Note that the CLI is not yet authenticated with the newly created token; we
did not call `vault login`, instead using the login API to simply return a
token.
1. Fetch the entity ID from the token. The caller identity is represented by the
`entity_id` property of the token:
```text
$ vault token lookup 70f97438-e174-c03c-40fe-6bcdc1028d6c
Key                 Value
---                 -----
accessor            a91d97f4-1c7d-6af3-e4bf-971f74f9fab9
creation_time       1502245243
creation_ttl        2764800
display_name        userpass-testuser
entity_id           307d6c16-6f5c-4ae7-46a9-2d153ffcbc63
expire_time         2017-09-09T22:20:43.448543132-04:00
explicit_max_ttl    0
id                  70f97438-e174-c03c-40fe-6bcdc1028d6c
issue_time          2017-08-08T22:20:43.448543003-04:00
meta                map[username:testuser]
num_uses            0
orphan              true
path                auth/userpass/login/testuser
policies            [default ping-policy]
renewable           true
ttl                 2764623
```
1. Login as the user:
```text
$ vault login 70f97438-e174-c03c-40fe-6bcdc1028d6c
```
1. Read a secret to trigger a PingID push. This will be a blocking call until
the push notification is either approved or declined:
```text
$ vault read secret/foo
Key                 Value
---                 -----
refresh_interval    768h
data                which can only be read after MFA validation
```
---
layout: docs
page_title: Okta MFA - MFA Support - Vault Enterprise
description: Vault Enterprise supports Okta MFA type.
---
# Okta MFA
@include 'alerts/enterprise-only.mdx'
This page demonstrates Okta MFA on ACL'd paths of Vault.
## Configuration
1. Enable the appropriate auth method:
```text
$ vault auth enable userpass
```
1. Fetch the mount accessor for the enabled auth method:
```text
$ vault auth list -detailed
```
The response will look like:
```text
Path         Type        Accessor                  Plugin    Default TTL    Max TTL    Replication    Description
----         ----        --------                  ------    -----------    -------    -----------    -----------
token/       token       auth_token_289703e9       n/a       system         system     replicated     token based credentials
userpass/    userpass    auth_userpass_54b8e339    n/a       system         system     replicated     n/a
```
1. Configure Okta MFA:
```text
$ vault write sys/mfa/method/okta/my_okta \
mount_accessor=auth_userpass_54b8e339 \
org_name="dev-262775" \
api_token="0071u8PrReNkzmATGJAP2oDyIXwwveqx9vIOEyCZDC"
```
1. Create a policy that grants access to `secret/foo` through the MFA method created
above:
```text
$ vault policy write okta-policy -<<EOF
path "secret/foo" {
capabilities = ["read"]
mfa_methods = ["my_okta"]
}
EOF
```
1. Create a user. MFA works only for tokens that have identity information on
them. Tokens created by logging in using auth methods will have the associated
identity information. Create a user in the `userpass` auth method and
authenticate against it:
```text
$ vault write auth/userpass/users/testuser \
password=testpassword \
policies=okta-policy
```
1. Create a login token:
```text
$ vault write auth/userpass/login/testuser password=testpassword
Key                    Value
---                    -----
token                  70f97438-e174-c03c-40fe-6bcdc1028d6c
token_accessor         a91d97f4-1c7d-6af3-e4bf-971f74f9fab9
token_duration         768h0m0s
token_renewable        true
token_policies         [default okta-policy]
token_meta_username    "testuser"
```
Note that the CLI is not yet authenticated with the newly created token; we
did not call `vault login`, but instead used the login API to simply return a
token.
1. Fetch the entity ID from the token. The caller identity is represented by the
`entity_id` property of the token:
```text
$ vault token lookup 70f97438-e174-c03c-40fe-6bcdc1028d6c
Key                 Value
---                 -----
accessor            a91d97f4-1c7d-6af3-e4bf-971f74f9fab9
creation_time       1502245243
creation_ttl        2764800
display_name        userpass-testuser
entity_id           307d6c16-6f5c-4ae7-46a9-2d153ffcbc63
expire_time         2017-09-09T22:20:43.448543132-04:00
explicit_max_ttl    0
id                  70f97438-e174-c03c-40fe-6bcdc1028d6c
issue_time          2017-08-08T22:20:43.448543003-04:00
meta                map[username:testuser]
num_uses            0
orphan              true
path                auth/userpass/login/testuser
policies            [default okta-policy]
renewable           true
ttl                 2764623
```
1. Login as the user:
```text
$ vault login 70f97438-e174-c03c-40fe-6bcdc1028d6c
```
1. Read a secret to trigger an Okta push. This will be a blocking call until
the push notification is either approved or declined:
```text
$ vault read secret/foo
Key Value
--- -----
refresh_interval 768h0m0s
data which can only be read after MFA validation
```
---
layout: docs
page_title: Sentinel Examples
description: An overview of how Sentinel interacts with Vault Enterprise.
---
# Examples
@include 'alerts/enterprise-and-hcp.mdx'
The following examples help to introduce Sentinel concepts. If you are
unfamiliar with writing Sentinel policies in Vault, please read through them to
understand some best practices.
You can find additional examples in the
[hashicorp/vault-guides repository](https://github.com/hashicorp/vault-guides/tree/master/governance).
## MFA and CIDR check on login
The following Sentinel policy requires the incoming user to successfully
validate with an Okta MFA push request before authenticating with LDAP.
Additionally, it ensures that only users on the 10.20.0.0/16 subnet are able to
authenticate using LDAP.
```python
import "sockaddr"
import "mfa"
import "strings"
# We expect logins to come only from our private IP range
cidrcheck = rule {
sockaddr.is_contained("10.20.0.0/16", request.connection.remote_addr)
}
# Require ping MFA validation to succeed
ping_valid = rule {
mfa.methods.ping.valid
}
main = rule when strings.has_prefix(request.path, "auth/ldap/login") {
ping_valid and cidrcheck
}
```
Note the `rule when` construct on the `main` rule. This scopes the policy to
the given condition.
Vault takes a default-deny approach to security. Without such scoping, because
active Sentinel policies must all pass successfully, the user would be forced
to start with a passing status and then define the conditions under which
access is denied, breaking the default-deny concept.
By instead indicating the conditions under which the `main` rule (and thus, in
this example, the entire policy) should be evaluated, the policy instead
describes the conditions under which a matching request is successful. This
keeps the default-deny feeling of Vault; if the evaluation condition isn't met,
the policy is simply a no-op.
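For contrast, here is a minimal sketch of the same checks without the scoping.
Because every active Sentinel policy must pass, this version would also be
evaluated, and fail, for requests that have nothing to do with LDAP logins:
```python
# Unscoped: evaluated for every request the policy applies to, so any
# request from outside 10.20.0.0/16 fails even when no LDAP login occurs.
main = rule {
    ping_valid and cidrcheck
}
```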
## Allow only specific identity entities or groups
```python
main = rule {
identity.entity.name is "jeff" or
identity.entity.id is "fe2a5bfd-c483-9263-b0d4-f9d345efdf9f" or
"sysops" in identity.groups.names or
"14c0940a-5c07-4b97-81ec-0d423accb8e0" in keys(identity.groups.by_id)
}
```
This example shows accessing Identity properties to make decisions, showing
that for Identity values IDs or names can be used for reference.
In general, it is more secure to use IDs. While convenient, entity names and
group names can be switched from one entity to another, because their only
constraint is that they must be unique. Using IDs guarantees that only that
specific entity or group matches; if the group or entity is deleted and
recreated with the same name, the match will fail.
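As a sketch of that recommendation, the same policy restricted to IDs only,
reusing the example IDs above:
```python
main = rule {
    identity.entity.id is "fe2a5bfd-c483-9263-b0d4-f9d345efdf9f" or
    "14c0940a-5c07-4b97-81ec-0d423accb8e0" in keys(identity.groups.by_id)
}
```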
## Instantly disallow all previously-generated tokens
Imagine a break-glass scenario where it is discovered that there have been
compromises of some unknown number of previously-generated tokens.
In such a situation it would be possible to revoke all previous tokens, but
this may take a while for a number of reasons, from requiring revocation of
generated secrets to the simple delay required to remove many entries from
storage. In addition, it could revoke tokens and generated secrets that later
forensic analysis shows were not compromised, unnecessarily widening the impact
of the mass revocation.
In Vault's ACL system a simple deny could be put into place, but this is a very
coarse-grained control and would require forethought to ensure that a policy
that can be modified in such a way is attached to every token. It also would
not prevent access to login paths or other unauthenticated paths.
Sentinel offers much more fine-grained control:
```python
import "time"
main = rule when not request.unauthenticated {
time.load(token.creation_time).unix >
time.load("2017-09-17T13:25:29Z").unix
}
```
Created as an EGP on `*`, this will block all access to any path Sentinel
operates on with a token created before the given time. Tokens created after
this time, since they were not a part of the compromise, will not be subject to
this restriction.
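One way to register the policy, assuming the rule above is saved as
`block-old-tokens.sentinel` (the policy and file names here are illustrative):
```text
$ vault write sys/policies/egp/block-old-tokens \
    policy=@block-old-tokens.sentinel \
    paths="*" \
    enforcement_level="hard-mandatory"
```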
## Delegate EGP policy management under a path
The following policy gives token holders with this policy (via their tokens or
their Identity entities/groups) the ability to write EGP policies that can only
take effect at Vault paths below certain prefixes. This effectively delegates
policy management to the team for their own key-value spaces.
```python
import "strings"
data_match = func() {
# Make sure there is request data
if length(request.data else 0) is 0 {
return false
}
# Make sure request data includes paths
if length(request.data.paths else 0) is 0 {
return false
}
# For each path, verify that it is in the allowed list
for strings.split(request.data.paths, ",") as path {
# Make it easier for users who might be used to starting paths with
# slashes
sanitizedPath = strings.trim_prefix(path, "/")
if not strings.has_prefix(sanitizedPath, "dev-kv/teama/") and
not strings.has_prefix(sanitizedPath, "prod-kv/teama/") {
return false
}
}
return true
}
# Only care about writing; reading can be allowed by normal ACLs
precond = rule {
request.operation in ["create", "update"] and
strings.has_prefix(request.path, "sys/policies/egp/")
}
main = rule when precond {
strings.has_prefix(request.path, "sys/policies/egp/teama-") and data_match()
}
```
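With that EGP in place, a hypothetical request from a team member that
satisfies both rules, the `teama-` name prefix and the allowed path prefixes,
might look like the following (all names are illustrative):
```text
$ vault write sys/policies/egp/teama-kv \
    policy=@teama-kv.sentinel \
    paths="dev-kv/teama/app,prod-kv/teama/app" \
    enforcement_level="soft-mandatory"
```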
---
layout: docs
page_title: Sentinel Properties
description: An overview of how Sentinel interacts with Vault Enterprise.
---
# Properties
@include 'alerts/enterprise-and-hcp.mdx'
Vault injects a rich set of data into the running Sentinel environment,
allowing for very fine-grained controls. This page enumerates the properties
available for use in Sentinel policies.
## Namespace properties
The `namespace` (Sentinel) namespace gives access to information about the
namespace in which the request is running. (This may or may not match the
client's chosen namespace, if a request reaches into a child namespace.)
| Name | Type | Description |
| :----- | :------- | :----------------------------- |
| `id` | `string` | The namespace ID |
| `path` | `string` | The root path of the namespace |
## Request properties
The following properties are available in the `request` namespace.
| Name | Type | Description |
| :----------------------- | :-------------------- | :------------------------------------------------------------------------------------------ |
| `connection.remote_addr` | `string` | TCP/IP source address of the client |
| `data` | `map (string -> any)` | Raw request data |
| `operation` | `string` | Operation type, e.g. "read" or "update" |
| `path` | `string` | Path, with any leading `/` trimmed |
| `policy_override` | `bool` | `true` if a `soft-mandatory` policy override was requested |
| `unauthenticated` | `bool` | `true` if the requested path is an unauthenticated path |
| `wrapping.ttl` | `duration` | The requested response-wrapping TTL in nanoseconds, suitable for use with the `time` import |
| `wrapping.ttl_seconds` | `int` | The requested response-wrapping TTL in seconds |
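As a brief sketch of these properties in use, the following hypothetical EGP
allows delete operations under `secret/` only from a private subnet; it follows
the precondition pattern used in the examples page:
```python
import "sockaddr"
import "strings"

# Only apply to delete operations under secret/.
precond = rule {
    request.operation is "delete" and
    strings.has_prefix(request.path, "secret/")
}

main = rule when precond {
    sockaddr.is_contained("10.0.0.0/8", request.connection.remote_addr)
}
```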
### Replication properties
The following properties exist at the `replication` namespace.
| Name | Type | Description |
| :------------ | :------- | :------------------------------------------------------------------------------------------------------------- |
| `dr.mode` | `string` | The state of DR replication. Valid values are "disabled", "bootstrapping", "primary", and "secondary" |
| `performance.mode` | `string` | The state of performance replication. Valid values are "disabled", "bootstrapping", "primary", and "secondary" |
## Token properties
The following properties, if available, are in the `token` namespace. The
namespace will not exist if there is no token information attached to a
request, e.g. when logging in.
| Name | Type | Description |
| :------------------------- | :----------------------- | :--------------------------------------------------------------------------------------------------------------------------------- |
| `creation_time` | `string` | The timestamp of the token's creation, in RFC3339 format |
| `creation_time_unix` | `int` | The timestamp of the token's creation, in seconds since Unix epoch UTC |
| `creation_ttl` | `duration` | The TTL the token was first created with in nanoseconds, suitable for use with the `time` import |
| `creation_ttl_seconds` | `int` | The TTL the token was first created with in seconds |
| `display_name` | `string` | The display name set on the token, if any |
| `entity_id` | `string` | The Identity entity ID attached to the token, if any |
| `explicit_max_ttl` | `duration` | If the token has an explicit max TTL, the duration of the explicit max TTL in nanoseconds, suitable for use with the `time` import |
| `explicit_max_ttl_seconds` | `int` | If the token has an explicit max TTL, the duration of the explicit max TTL in seconds |
| `metadata` | `map (string -> string)` | Metadata set on the token |
| `num_uses` | `int` | The number of uses remaining on a use-count-limited token; 0 if the token has no use-count limit |
| `path` | `string` | The request path that resulted in creation of this token |
| `period` | `duration` | If the token has a period, the duration of the period in nanoseconds, suitable for use with the `time` import |
| `period_seconds` | `int` | If the token has a period, the duration of the period in seconds |
| `policies` | `list (string)` | Policies directly attached to the token |
| `role` | `string` | If created via a token role, the role that created the token |
| `type` | `string` | The type of token, currently will be either `batch` or `service` |
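For example, a minimal sketch that uses the `type` property to require service
tokens (rejecting batch tokens) on whatever paths the policy is applied to:
```python
main = rule {
    token.type is "service"
}
```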
## Token namespace properties
The following properties, if available, are in the `token.namespace` namespace.
The (Sentinel) namespace will not exist if there is no token information attached to a
request, e.g. when logging in.
| Name | Type | Description |
| :----- | :------- | :----------------------------- |
| `id` | `string` | The namespace ID |
| `path` | `string` | The root path of the namespace |
## Identity properties
The following properties, if available, are in the `identity` namespace. The
namespace may not exist if there is no token information attached to the
request; however, at login time the user's request data will be used to attempt
to find any existing Identity information, or create some information to pass
to MFA functions.
### Entity properties
These exist at the `identity.entity` namespace.
| Name | Type | Description |
| :------------------ | :----------------------- | :------------------------------------------------------------ |
| `creation_time` | `string` | The entity's creation time in RFC3339 format |
| `id` | `string` | The entity's ID |
| `last_update_time` | `string` | The entity's last update (modify) time in RFC3339 format |
| `metadata` | `map (string -> string)` | Metadata associated with the entity |
| `name` | `string` | The entity's name |
| `merged_entity_ids` | `list (string)` | A list of IDs of entities that have been merged into this one |
| `aliases` | `list (alias)` | List of aliases associated with this entity |
| `policies` | `list (string)` | List of the policies set on this entity |
### Alias properties
These can be retrieved from `identity.entity.aliases`.
| Name | Type | Description |
| :----------------------- | :----------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------- |
| `creation_time` | `string` | The alias's creation time in RFC3339 format |
| `id` | `string` | The alias's ID |
| `last_update_time` | `string` | The alias's last update (modify) time in RFC3339 format |
| `metadata` | `map (string -> string)` | Metadata associated with the alias |
| `custom_metadata` | `map (string -> string)` | Custom metadata associated with the alias |
| `merged_from_entity_ids` | `list (string)` | If this alias was attached to the current entity via one or more merges, the original entity/entities will be in this list |
| `mount_accessor` | `string` | The immutable accessor of the mount that created this alias |
| `mount_path` | `string` | The path of the mount that created this alias; unlike the accessor, there is no guarantee that the current path represents the original mount |
| `mount_type` | `string` | The type of the mount that created this alias |
| `name` | `string` | The alias's name |
### Groups properties
These exist at the `identity.groups` namespace.
| Name | Type | Description |
| :-------- | :---------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------- |
| `by_id` | `map (string -> group)` | A map of group ID to group information |
| `by_name` | `map (string -> group)` | A map of group name to group information; unlike the group ID, there is no guarantee that the current name will always represent the same group |
### Group properties
These can be retrieved from the `identity.groups` maps.
| Name | Type | Description |
| :------------------ | :----------------------- | :----------------------------------------------------------------- |
| `creation_time` | `string` | The group's creation time in RFC3339 format |
| `id` | `string` | The group's ID |
| `last_update_time` | `string` | The group's last update (modify) time in RFC3339 format |
| `metadata` | `map (string -> string)` | Metadata associated with the group |
| `name` | `string` | The group's name |
| `member_entity_ids` | `list (string)` | A list of IDs of entities that are directly assigned to this group |
| `parent_group_ids` | `list (string)` | A list of IDs of groups that are parents of this group |
| `policies` | `list (string)` | List of the policies set on this group |
## MFA properties
These properties exist at the `mfa` namespace.
| Name | Type | Description |
| :-------- | :----------------------- | :---------------------------------------- |
| `methods` | `map (string -> method)` | A map of method name to method properties |
### MFA method properties
These properties can be accessed via the `mfa.methods` selector.
| Name | Type | Description |
| :------ | :----- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `valid` | `bool` | Whether the method has successfully been validated; if validation has not been attempted, this will trigger the validation attempt. The result of the validation attempt will be used for this method for all policies for the given request. |
## Control group properties
These properties exist at the `controlgroup` namespace.
| Name | Type | Description |
| :--------------------- | :--------------------- | :------------------------------------------ |
| `time`, `request_time` | `string` | The original request time in RFC3339 format |
| `authorizations` | `list (authorization)` | List of control group authorizations |
### Control group authorization
These properties can be accessed via the `controlgroup.authorizations` selector.
| Name | Type | Description |
| :------- | :---------------- | :--------------------------------------------------------- |
| `time` | `string` | The authorization time in RFC3339 format |
| `entity` | `identity.entity` | The identity entity for the authorizer. |
| `groups` | `identity.groups` | The map of identity groups associated with the authorizer. |
---
layout: docs
page_title: Vault Enterprise Sentinel Integration
description: An overview of how Sentinel interacts with Vault Enterprise.
---
# Vault Enterprise and Sentinel integration
@include 'alerts/enterprise-and-hcp.mdx'
Vault Enterprise integrates HashiCorp Sentinel to provide a rich set of access
control functionality. Because Vault is a security-focused product trusted with
high-risk secrets and assets, and because of its default-deny stance,
integration with Vault is implemented in a defense-in-depth fashion. This takes
the form of multiple types of policies and a fixed evaluation order.
## Policy types
Vault's policy system has been expanded to support three types of policies:
- `ACLs` - These are the [traditional Vault
policies](/vault/docs/concepts/policies) and remain unchanged.
- `Role Governing Policies (RGPs)` - RGPs are Sentinel policies that are tied
to particular tokens, Identity entities, or Identity groups. They have access
to a rich set of controls across various aspects of Vault.
- `Endpoint Governing Policies (EGPs)` - EGPs are Sentinel policies that are
tied to particular paths instead of tokens. They have access to as much
request information as possible, but they can take effect even on
unauthenticated paths, such as login paths.
Not every unauthenticated path supports EGPs. For instance, the paths related
to root token generation cannot support EGPs, because root token generation is already the mechanism
of last resort if, for instance, all clients are locked out of Vault due to
misconfigured EGPs.
Like with ACLs, [root tokens](/vault/docs/concepts/tokens#root-tokens)
are not subject to Sentinel policy checks.
Sentinel execution is significantly slower than normal ACL policy checking. If
high performance is needed, test appropriately when introducing Sentinel
policies.
### Policy enforcement levels
Sentinel policies have three enforcement levels to choose from.
| Level | Description |
| -------------- | --------------------------------------------------------------------------- |
| advisory | The policy is allowed to fail. Can be used as a tool to educate new users. |
| soft-mandatory | The policy must pass unless an [override](#policy-overriding) is specified. |
| hard-mandatory | The policy must pass. |
## Policy evaluation
Vault evaluates incoming requests against policies of all types that are
applicable.


1. If the request is unauthenticated, skip to evaluating the EGPs. Otherwise,
evaluate the token's ACL policies. These must grant access; as always, a
failure to be granted capabilities on a path via ACL policies denies the
request.
2. Evaluate RGPs attached to the client token. All policies must pass according
to their enforcement level.
3. Evaluate EGPs set on the requested path and any prefix-matching EGPs set on
less-specific paths. All policies must pass according to their enforcement
level.
Any failure at any of these steps results in a denied request.
### RGPs and namespaces
Policies, auth methods, secrets engines, and tokens are locked into the
[namespace](/vault/docs/enterprise/namespaces) they are created in. However,
identity groups can pull in entities and groups from other namespaces.
<Tip>
Refer to the [Set up entities and groups section of the Secure Multi-Tenancy
with Namespaces
tutorial](/vault/tutorials/enterprise/namespaces#set-up-entities-and-groups) for
step-by-step instructions.
</Tip>
<Warning>
As of the following versions, Vault only applies RGPs derived from identity
group membership to entities in child namespaces:
- `1.15.0+`
- `1.14.4+`
- `1.13.8+`
</Warning>
The scenarios below describe the relevant changes in more detail.
#### Versions 1.15.0, 1.14.4, 1.13.8, and later
The training namespace is a child namespace of the education namespace. The "Sun
Shine" entity created in the training namespace is a member of the "Tester"
group which is defined in the education namespace. The group members inherit the
group-level policy.


#### Versions 1.15.0-rc1, 1.14.3, 1.13.7, and earlier
The training namespace is a child namespace of the education namespace. The "Sun
Shine" entity created in the education namespace is a member of the "Tester"
group which is defined in the training namespace. The group members inherit the
group-level policy.


While ACL policies and EGPs set rules on a specific path, an RGP does not
specify a target path. RGPs are tied to tokens, identity entities, or identity
groups, so you can write rules without specifying a path.
What if the deny-all RGP in the training namespace looked like this:
<CodeBlockConfig filename="deny-all.sentinel">
```hcl
precond = rule {
identity.entity.metadata.org_id is "A012345X"
}
main = rule when precond {
false
}
```
</CodeBlockConfig>
Vault checks the requesting token's entity metadata. If the `org_id` metadata
exists and the value is `A012345X`, the request gets denied because the
enforcement level is hard-mandatory. It does not matter if the request invokes a
path starts with `/education` or `/education/training`, or even `/foo` because
there is no path associated with the deny-all RGP.
## Policy overriding
Vault supports normal Sentinel overriding behavior. Requests to override can be
specified on the command line via the `policy-override` flag or in HTTP
requests by setting the `X-Vault-Policy-Override` header to `true`.
Override requests are visible in Vault's audit log; in addition, override
requests and their eventual status (whether they ended up being required) are
logged as warnings in Vault's server logs.
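For illustration, both forms of an override request against a hypothetical
`secret/foo` path guarded by a soft-mandatory policy:
```text
$ vault read -policy-override secret/foo

$ curl \
    --header "X-Vault-Token: ..." \
    --header "X-Vault-Policy-Override: true" \
    https://127.0.0.1:8200/v1/secret/foo
```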
## MFA
Sentinel policies support the [Identity-based MFA
system](/vault/docs/enterprise/mfa) in Vault Enterprise. Within a single
request, multiple checks of any named MFA method will only trigger
authentication behavior for that method once, regardless of whether its
validity is checked via ACLs, RGPs, or EGPs.
EGPs can be used to require MFA on otherwise unauthenticated paths, such as
login paths. On such paths, the request data will perform a lookahead to try to
discover the appropriate Identity information to use for MFA. It may be
necessary to pre-populate Identity entries or supply additional parameters with
the request if you require more information to use MFA than the endpoint is
able to glean from the original request alone.
# Using Sentinel
## Configuration
Sentinel policies can be configured via the `sys/policies/rgp/` and
`sys/policies/egp/` endpoints; see [the
documentation](/vault/api-docs/system/policies) for more information.
Once set, RGPs can be assigned to Identity entities and groups or to tokens
just like ACL policies. As a result, they cannot share names with ACL policies.
When setting an EGP, a list of paths must be provided specifying on which paths
that EGP should take effect. Endpoints can have multiple distinct EGPs set on
them; all are evaluated for each request. Paths can use a glob character (`*`)
as the last character of the path to perform a prefix match; a path that
consists only of a `*` will apply to the root of the API. Since requests are
subject to EGPs exactly matching the requested path and any glob EGPs
sitting further up the request path, an EGP with a path of `*` will thus take
effect on all requests.
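As a sketch, registering an EGP that prefix-matches everything under `secret/`
(the policy name and file here are illustrative):
```text
$ vault write sys/policies/egp/secret-checks \
    policy=@secret-checks.sentinel \
    paths="secret/*" \
    enforcement_level="advisory"
```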
## Properties and examples
See the [Examples](/vault/docs/enterprise/sentinel/examples) page for examples
of Sentinel in action, and the
[Properties](/vault/docs/enterprise/sentinel/properties) page for detailed
property documentation.
## Tutorial
Refer to the [Sentinel Policies](/vault/tutorials/policies/sentinel)
tutorial to learn how to author Sentinel policies in Vault.
---
layout: docs
page_title: Run Vault Enterprise with many namespaces
description: >-
Guidance for using thousands of namespaces with Vault Enterprise
---
# Run Vault Enterprise with many namespaces
Use namespaces to create isolated environments within Vault Enterprise.
By default, Vault limits the number and depth of namespaces based on your
storage configuration. The information below provides guidance on how to modify
your namespace limits and what to expect when operating a Vault cluster with 7000+
namespaces.
## Default namespace limits
@include 'namespace-limits.mdx'
## How to modify your namespace limit
@include 'storage-entry-size.mdx'
## Performance considerations
Running Vault with thousands of namespaces can have operational impacts on a
cluster. Below are some performance considerations to take into account before
using thousands of namespaces.
It is **not** recommended to use thousands of namespaces with any version of Vault
lower than 1.13.9, 1.14.5, or 1.15.0. Those versions include improvements that
make Raft heartbeats more reliable when using many namespaces.
<Note title="Testing parameters">
The aggregated performance data below assumes a 3-node Vault cluster running
on N2 standard VMs with Google Kubernetes Engine, default mounts, and
integrated storage. The results average metrics from multiple `n2-standard-16`
and `n2-standard-32` VMs with a varying number of namespaces.
</Note>
### Unseal times
Vault sets up and initializes every mount after an unseal event. At minimum,
the initialization process includes the default mounts for all active namespaces
(`sys`, `identity`, `cubbyhole`, and `token`).
The more namespaces and custom mounts in the deployment, the longer the
post-unseal initialization takes. As a result, **even with auto-unseal**, Vault
will be unresponsive during initialization for deployments with many namespaces.
Post-unseal times observed during testing:
| Number of namespaces | Unseal initialization time |
|----------------------|-------------------|
| 10 | ~5 seconds |
| 10000 | ~2-3 minutes |
| 20000 | ~12-14 minutes |
| 30000 | ~33-36 minutes |
### Cluster leadership transfer times
Vault high availability clusters have a leader (also known as an active node)
which is the server that accepts writes to the cluster and replicates the
written data to the follower nodes. If the leader crashes or needs to be removed
from the cluster, one of the follower nodes must take over leadership. This is
known as a leadership transfer.
Whenever a leadership transfer happens, the new active node must go through all
of the mounts in the cluster and set them up before the node can be ready to be
the leader. Because every namespace has at least 4 mounts (`sys`, `identity`,
`cubbyhole`, and `token`), the time for a leadership transfer to complete will
increase with the number of namespaces.
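For example, a cluster with 30,000 namespaces has at least 120,000 mounts that
the new active node must set up before it is ready to lead.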
Leadership transfer times observed for the [`vault operator step-down`](/vault/docs/commands/operator/step-down)
command:
| Number of namespaces | Time until a node is elected as leader |
|----------------------|----------------------------------------|
| 10 | ~2 seconds |
| 10000 | ~33-45 seconds |
| 20000 | ~1-2 minutes |
| 30000 | ~4 minutes |
## System requirements
### Minimum memory requirements
Each namespace requires at least 435 KB of memory to store information
about the paths available within the namespace. Given `N` namespaces, your
Vault deployment must include at least (435 x N) KB memory for namespace support
to avoid degraded performance.
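For example, supporting 20000 namespaces requires at least 435 KB x 20000, or
roughly 8.7 GB, of memory for namespace path information alone.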
### Rollback and rotation worker requirements
Sometimes, Vault secret and auth engines need to clean up data after a request
is canceled or a request fails halfway through. Vault issues rollback operations
every minute to each mount in order to periodically trigger the clean up
process.
By default, Vault uses 256 workers to perform rollback operations. Mounts with a
large number of namespaces can become bottlenecks that slow down the overall
rollback process. The effects of the slowdown vary based on the particular
mounts. At minimum, your Vault deployment will take longer to fully purge stale
data and periodic rotations may happen less frequently than intended.
You can tell whether the number of rollback workers is sufficient by monitoring
the following metrics:
| Expected range | Metric |
|----------------|--------------------------------------------------------------------------------------------------|
| 0 – 256 | [`vault.rollback.queued`](/vault/docs/internals/telemetry/metrics/core-system#rollback-metrics) |
| 0 – 60000 | [`vault.rollback.waiting`](/vault/docs/internals/telemetry/metrics/core-system#rollback-metrics) |
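As a sketch, assuming telemetry is enabled and your token is permitted to read
`sys/metrics`, you can spot-check the rollback metrics from the command line:
```shell-session
$ curl --silent --header "X-Vault-Token: $VAULT_TOKEN" \
    "$VAULT_ADDR/v1/sys/metrics?format=prometheus" | grep rollback
```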
## Identity secret engine warnings
When using OIDC with many namespaces, you may see warnings in your Vault logs
from the `identity` secret mount under the `root` namespace. For example:
```text
2023-10-24T15:47:56.594Z [WARN] secrets.identity.identity_51eb2411: error expiring OIDC public keys: err="context deadline exceeded"
2023-10-24T15:47:56.594Z [WARN] secrets.identity.identity_51eb2411: error rotating OIDC keys: err="context deadline exceeded"
```
The `secrets.identity` warnings occur because the root namespace is responsible
for rotating the [OIDC keys](/vault/docs/secrets/identity/oidc-provider) of all
other namespaces.
<Warning title="Avoid OIDC with many namespaces">
Using Vault as an [OIDC provider](/vault/docs/concepts/oidc-provider) with
many namespaces can severely delay the rotation and invalidation of OIDC keys.
</Warning> | vault | layout docs page title Run Vault Enterprise with many namespaces description Guidance for using thousands of namespaces with Vault Enterprise Run Vault Enterprise with many namespaces Use namespaces to create isolated environments within Vault Enterprise By default Vault limits the number and depth of namespaces based on your storage configuration The information below provides guidance on how to modify your namespace limits and what expect when operating a Vault cluster with 7000 namespaces Default namespace limits include namespace limits mdx How to modify your namespace limit include storage entry size mdx Performance considerations Running Vault with thousands of namespaces can have operational impacts on a cluster Below are some performance considerations to take into account before using thousands of namespaces It is not recommended to use thousands of namespaces with any version of Vault lower than 1 13 9 1 14 5 or 1 15 0 Improvements were released in those versions which can improve the reliability of Raft heartbeats when using many namespaces Note title Testing parameters The aggregated performance data below assumes a 3 node Vault cluster running on N2 standard VMs with Google Kubernetes Engine default mounts and integrated storage The results average metrics from multiple n2 standard 16 and n2 standard 32 VMs with a varying number of namespaces Note Unseal times Vault sets up and initializes every mount after an unseal event At minimum the initialization process includes the default mounts for all active namespaces sys identity cubbyhole and token The more namespaces and custom mounts in the deployment the longer the post unseal initialization takes As a result even with auto unseal Vault will be unresponsive during initialization for deployments with many namespaces Post unseal times observed during testing Number of namespaces Unseal initialization time 10 5 seconds 10000 2 3 minutes 20000 12 14 minutes 30000 33 36 minutes Cluster leadership transfer times Vault high availability clusters have a leader also known as an active node which is the server that accepts writes to the cluster and replicates the written data to the follower nodes If the leader crashes or needs to be removed from the cluster one of the follower nodes must take over leadership This is known as a leadership transfer Whenever a leadership transfer happens the new active node must go through all of the mounts in the cluster and set them up before the node can be ready to be the leader Because every namespace has at least 4 mounts sys identity cubbyhole and token the time for a leadership transfer to complete will increase with the number of namespaces Leadership transfer times observed for the vault operator step down vault docs commands operator step down command Number of namespaces Time until a node is elected as leader 10 2 seconds 10000 33 45 seconds 20000 1 2 minutes 30000 4 minutes System requirements Minimum memory requirements Each namespace requires at least 435 KB of memory to store information about the paths available within the namespace Given N namespaces your Vault deployment must include at least 435 x N KB memory for namespace support to avoid degraded performance Rollback and rotation worker requirements Sometimes Vault secret and auth engines need to clean up data after a request is canceled or a request fails halfway through Vault issues rollback operations every minute to each mount in order to periodically trigger the clean up process By default Vault uses 256 
workers to perform rollback operations Mounts with a large number of namespaces can become bottlenecks that slow down the overall rollback process The effects of the slowdown vary based on the particular mounts At minimum your Vault deployment will take longer to fully purge stale data and periodic rotations may happen less frequently than intended You can tell whether the number of rollback workers is sufficient by monitoring the following metrics Expected range Metric 0 256 vault rollback queued vault docs internals telemetry metrics core system rollback metrics 0 60000 vault rollback waiting vault docs internals telemetry metrics core system rollback metrics Identity secret engine warnings When using OIDC with many namespaces you may see warnings in your Vault logs from the identity secret mount under the root namespace For example text 2023 10 24T15 47 56 594Z WARN secrets identity identity 51eb2411 error expiring OIDC public keys err context deadline exceeded 2023 10 24T15 47 56 594Z WARN secrets identity identity 51eb2411 error rotating OIDC keys err context deadline exceeded The secrets identity warnings occur because the root namespace is responsible for rotating the OIDC keys vault docs secrets identity oidc provider of all other namespaces Warning title Avoid OIDC with many namespaces Using Vault as an OIDC provider vault docs concepts oidc provider with many namespaces can severely delay the rotation and invalidation of OIDC keys Warning |
---
layout: docs
page_title: Namespaces - Vault Enterprise
description: >-
Vault Enterprise has support for Namespaces, a feature to enable Secure
Multi-tenancy (SMT) and self-management.
---
# Vault Enterprise namespaces <EnterpriseAlert product="vault" inline="true" />
Many organizations implement Vault as a service to provide centralized
management of sensitive data and ensure that the different teams in an
organization operate within isolated environments known as **tenants**.
Multi-tenant environments have the following implementation challenges:
- **Tenant isolation**. Teams within a Vault as a Service (VaaS)
environment require strong isolation for their policies, secrets, and
identities. Tenant isolation may also be required due to organizational
security and privacy requirements or to address compliance regulations like
[GDPR](https://gdpr.eu).
- **Long-term management**. Tenants typically have different policies and teams
request changes to their tenants at different rates. As a result, managing a
multi-tenant environment can become difficult for a single team as the number
of tenants within the organization grows.
Namespaces support secure multi-tenancy (**SMT**) within a single Vault
Enterprise instance with tenant isolation and administration delegation so Vault
administrators can empower delegates to manage their own tenant environment.
When you create a namespace, you establish an isolated environment with separate
login paths that functions as a mini-Vault instance within your Vault
installation. Users can then create and manage their sensitive data within the
confines of that namespace, including:
- secret engines
- authentication methods
- ACL, EGP, and RGP policies
- password policies
- entities
- identity groups
- tokens
<Tip>
Namespaces are isolated environments, but Vault administrators can still share
and enforce global policies across namespaces with the
[group-policy-application](/vault/api-docs/system/config-group-policy-application)
endpoint of the Vault API.
</Tip>
## Namespace naming restrictions
Valid Vault namespace names:
- **CANNOT** end with `/`
- **CANNOT** contain spaces
- **CANNOT** be one of the following reserved strings:
- `root`
- `sys`
- `audit`
- `auth`
- `cubbyhole`
- `identity`
Refer to the [Namespace limits section](/vault/docs/internals/limits#namespace-limits)
of [Vault limits and maximums](/vault/docs/internals/limits) for storage limits
related to managing namespaces.
<Tip title="Related reading">
Read the
[Vault namespace and mount structuring](/vault/tutorials/enterprise/namespace-structure)
tutorial for best practices and recommendations for structuring your namespaces.
</Tip>
## Child namespaces
A **child namespace** is any namespace that exists entirely within the scope of
another namespace. The containing namespace is the **parent namespace**. For
example, given the namespace path `A/B/C`:
- `A` is the top-most namespace and exists under the root namespace for the
Vault instance.
- `B` is a child namespace of `A` and the parent namespace of `C`.
- `C` is a child namespace of `B` and the grandchild namespace of `A`.
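For example, you could create the `A/B/C` hierarchy by targeting each parent
namespace in turn:
```shell-session
$ vault namespace create A
$ vault namespace create -namespace=A B
$ vault namespace create -namespace=A/B C
```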
Children can inherit elements from their parent namespaces. For example,
policies for a child namespace might reference entities or groups from the parent
namespace. Parent namespaces can also **assert** policies on identities within
a child namespace.
Vault administrators can configure the desired inheritance behavior with the
[group-policy-application](/vault/api-docs/system/config-group-policy-application)
endpoint of the Vault API.
## Delegation and administrative namespaces
Vault system administrators can assign administration rights to delegate
admins to allow teams to self-manage their namespace. In addition to basic
management, delegate admins can create child namespaces and assign admin rights
to subordinate delegate admins.
Additionally,
[administrative namespaces](/vault/docs/enterprise/namespaces/create-admin-namespace)
let Vault administrators grant access to a
[predefined subset of privileged endpoints](#privileged-endpoints) by setting
the relevant namespace parameters in their Vault configuration file.
## Vault API and namespaces
Users can perform API operations under a specific namespace by setting the
`X-Vault-Namespace` header to the absolute or relative namespace path. Relative
namespace paths are assumed to be child namespaces of the calling namespace.
You can also provide an absolute namespace path without using the
`X-Vault-Namespace` header.
Vault constructs the fully qualified namespace path based on the calling
namespace and the `X-Vault-Namespace` header to route the request to the
appropriate namespace. For example, the following requests all route to the
`ns1/ns2/secret/foo` namespace:
1. Path: `ns1/ns2/secret/foo`
2. Path: `secret/foo`, Header: `X-Vault-Namespace: ns1/ns2/`
3. Path: `ns2/secret/foo`, Header: `X-Vault-Namespace: ns1/`
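For example, the second request above could be issued with `curl` as follows,
assuming a KV mount at `secret/` inside `ns1/ns2`:
```shell-session
$ curl --header "X-Vault-Namespace: ns1/ns2/" \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    $VAULT_ADDR/v1/secret/foo
```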
<Tip title="Vault Enterprise has a namespaces API">
Use the [/sys/namespaces](/vault/api-docs/system/namespaces) API or
[`namespace`](/vault/docs/commands/namespace) CLI command to manage
your namespaces.
</Tip>
## Restricted API paths
The Vault API includes system backend endpoints, which are mounted under the
`sys/` path. System endpoints let you interact with the internal features of
your Vault instance.
By default, Vault allows non-root calls to the less-sensitive system backend
endpoints. But, for security reasons, Vault restricts access to some of the
system backend endpoints to calls from the root namespace or calls that use a
token in the root namespace with elevated permissions.
Rather than granting access to the full set of privileged `sys/` paths, Vault
administrators can also grant access to a predefined subset of the restricted
endpoints with an administrative namespace.
@include 'api/restricted-endpoints.mdx'
## Learn more
Refer to the following tutorials to learn more about Vault namespaces:
- [Secure Multi-Tenancy with Namespaces](/vault/tutorials/enterprise/namespaces)
- [Secrets Management Across Namespaces without Hierarchical
Relationship](/vault/tutorials/enterprise/namespaces-secrets-sharing)
- [Vault Namespace and Mount Structuring
Guide](/vault/tutorials/enterprise/namespace-structure)
- [HCP Vault Dedicated namespace
considerations](/vault/tutorials/cloud-ops/hcp-vault-namespace-considerations)
- [Using many Namespaces](/vault/docs/enterprise/namespaces/namespace-limits) | vault | layout docs page title Namespaces Vault Enterprise description Vault Enterprise has support for Namespaces a feature to enable Secure Multi tenancy SMT and self management Vault Enterprise namespaces EnterpriseAlert product vault inline true Many organizations implement Vault as a service to provide centralized management of sensitive data and ensure that the different teams in an organization operate within isolated environments known as tenants Multi tenant environments have the following implementation challenges Tenant isolation Teams within a Vault as a Service VaaS environment require strong isolation for their policies secrets and identities Tenant isolation may also be required due to organizational security and privacy requirements or to address compliance regulations like GDPR https gdpr eu Long term management Tenants typically have different policies and teams request changes to their tenants at different rates As a result managing a multi tenant environment can become difficult for a single team as the number of tenants within the organization grows Namespaces support secure multi tenancy SMT within a single Vault Enterprise instance with tenant isolation and administration delegation so Vault administrators can empower delegates to manage their own tenant environment When you create a namespace you establish an isolated environment with separate login paths that functions as a mini Vault instance within your Vault installation Users can then create and manage their sensitive data within the confines of that namespace including secret engines authentication methods ACL EGP and RGP policies password policies entities identity groups tokens Tip Namespaces are isolated environments but Vault administrators can still share and enforce global policies across namespaces with the group policy application vault api docs system config group policy application endpoint of the Vault API Tip Namespace naming restrictions Valid Vault namespace names CANNOT end with CANNOT contain spaces CANNOT be one of the following reserved strings root sys audit auth cubbyhole identity Refer to the Namespace limits section vault docs internals limits namespace limits of Vault limits and maximums vault docs internals limits for storage limits related to managing namespaces Tip title Related reading Read the Vault namespace and mount structuring vault tutorials enterprise namespace structure tutorial for best practices and recommendations for structuring your namespaces Tip Child namespaces A child namespace is any namespace that exists entirely within the scope of another namespace The containing namespace is the parent namespace For example given the namespace path A B C A is the top most namespace and exists under the root namespace for the Vault instance B is a child namespace of A and the parent namespace of C C is a child namespace of B and the grandchild namespace of A Children can inherit elements from their parent namespaces For example policies for a child namespace might reference entities or groups from the parent namespace Parent namespaces can also assert policies on identities within a child namespace Vault administrators can configure the desired inheritance behavior with the group policy application vault api docs system config group policy application endpoint of the Vault API Delegation and administrative namespaces Vault system administrators can assign administration rights to delegate admins to 
allow teams to self manage their namespace In addition to basic management delegate admins can create child namespaces and assign admin rights to subordinate delegate admins Additionally administrative namespaces vault docs enterprise namespaces create admin namespace let Vault administrators grant access to a predefined subset of privileged endpoints privileged endpoints by setting the relevant namespace parameters in their Vault configuration file Vault API and namespaces Users can perform API operations under a specific namespace by setting the X Vault Namespace header to the absolute or relative namespace path Relative namespace paths are assumed to be child namespaces of the calling namespace You can also provide an absolute namespace path without using the X Vault Namespace header Vault constructs the fully qualified namespace path based on the calling namespace and the X Vault header to route the request to the appropriate namespace For example the following requests all route to the ns1 ns2 secret foo namespace 1 Path ns1 ns2 secret foo 2 Path secret foo Header X Vault Namespace ns1 ns2 3 Path ns2 secret foo Header X Vault Namespace ns1 Tip title Vault Enterprise has a namespaces API Use the sys namespaces vault api docs system namespaces API or namespace vault docs commands namespace CLI command to manage your namespaces Tip Restricted API paths The Vault API includes system backend endpoints which are mounted under the sys path System endpoints let you interact with the internal features of your Vault instance By default Vault allows non root calls to the less sensitive system backend endpoints But for security reasons Vault restricts access to some of the system backend endpoints to calls from the root namespace or calls that use a token in the root namespace with elevated permissions Rather than granting access to the full set of privileged sys paths Vault administrators can also grant access to a predefined subset of the restricted endpoints with an administrative namespace include api restricted endpoints mdx Learn more Refer to the following tutorials to learn more about Vault namespaces Secure Multi Tenancy with Namespaces vault tutorials enterprise namespaces Secrets Management Across Namespaces without Hierarchical Relationship vault tutorials enterprise namespaces secrets sharing Vault Namespace and Mount Structuring Guide vault tutorials enterprise namespace structure HCP Vault Dedicated namespace considerations vault tutorials cloud ops hcp vault namespace considerations Using many Namespaces vault docs enterprise namespaces namespace limits |
---
layout: docs
page_title: Namespace and mount structure guide
description: >-
Explains HashiCorp's recommended approach to structuring Vault namespaces and how namespaces impact endpoint paths.
---
# Namespace and mount structure guide
Namespaces are isolated environments that functionally create "Vaults within a
Vault." They have separate login paths, and support creating and managing data
isolated to their namespace. This functionality enables you to provide Vault as
a service to tenants.

This guide provides a recommended approach to structuring Vault namespaces and
mount paths, as well as guidance on how to make namespace and path structuring
decisions based on your organizational structure and use cases.
### Why is this topic important?
Everything in Vault is path-based. Each path corresponds to an operation or
secret in Vault, and the Vault API endpoints map to these paths; therefore,
writing policies configures the permitted operations on specific secret paths.
For example, to grant access to manage tokens in the root namespace, the policy
path is `auth/token/*`. To manage tokens for the _education_ namespace, the
fully-qualified path functionally becomes `education/auth/token/*`.
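For example, a policy written from within the _education_ namespace uses the
relative path; Vault scopes it to the namespace automatically (a sketch):
```hcl
# Applied inside the education namespace, this matches education/auth/token/*
path "auth/token/*" {
  capabilities = ["create", "read", "update", "list"]
}
```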
The following diagram demonstrates the API paths based on where the auth method
and secrets engines are enabled.

You can isolate secrets using namespaces or mounts dedicated to each Vault
client. For example, you can create a namespace for each isolated tenant and
they are responsible for managing the resources under their namespace.
Alternatively, you can mount a dedicated secrets engine at a path dedicated to
each team within the organization.

How you isolate the secrets determines who is responsible for managing those
secrets and, more importantly, the policies related to those secrets.
<Note>
The creation of namespaces should be performed by a user with a highly
privileged token such as `root` to set up isolated environments for each
organization, team, or application.
</Note>
## Deployment considerations
To plan and design the Vault namespaces, auth method paths and secrets engine
paths, you need to consider how to best structure Vault's logical objects for
your organization.
<table>
<thead>
<tr>
<th>Requirements</th>
<th>What to consider</th>
</tr>
</thead>
<tbody>
<tr>
<td>Organizational structure</td>
<td>
<ul>
<li>What is your organizational structure?</li>
<li>What is the level of granularity across lines of businesses (LOBs), divisions, teams, services, apps that needs to be reflected in Vault's end-state design?</li>
</ul>
</td>
</tr>
<tr>
<td>Self-service requirements</td>
<td>
<ul>
<li>Given your organizational structure, what is the desired level of <strong>self-service</strong> required?</li>
<li>How are Vault policies to be managed?</li>
<li>Will teams need to directly manage policies for their own scope of responsibility?</li>
<li>Or, will they be interacting with Vault via some abstraction layer where policies and patterns will be templatized? For example, configuration by code, Git flows, the Terraform Vault provider, custom onboarding layers, or some combination of these.</li>
</ul>
</td>
</tr>
<tr>
<td>Audit requirements</td>
<td>
<ul>
<li>What are the requirements around auditing usage of Vault within your organization?</li>
<li>Is there a need to regularly certify access to secrets?</li>
<li>Is there a need to review and/or decommission stale secrets or auth roles?</li>
<li>Is there a need to determine chargeback amounts to internal customers?</li>
</ul>
</td>
</tr>
<tr>
<td>Secrets engine requirements</td>
<td>
What types of secrets engines will you use (KV, database, AD, PKI, etc.)? <br /><br />
For large organizations, each of these might require different structuring patterns. For example, with KV secrets engine, each team might have their own dedicated KV mount. However, for AD secrets engine, this is inherently a <em>shared</em> type of mount so you would manage access at a role level, rather than having multiple mounts that share the same connection configuration.
</td>
</tr>
</tbody>
</table>
## Chroot namespace
<Note title="Vault version">
To use the chroot listener feature, you must run **Vault Enterprise 1.15** or
later.
</Note>
Vault clients (users, applications, etc.) must be aware of which namespace to
send requests to, and set the target namespace using the `-namespace` flag, the
`X-Vault-Namespace` HTTP header, or the `VAULT_NAMESPACE` environment variable.
If the target namespace is not set correctly, the request fails. This can be
cumbersome.
To simplify this, Vault operators can add an additional `listener` stanza to the
configuration file and define `chroot_namespace` to specify an alternate
top-level namespace.
**Example:**
<CodeBlockConfig filename="vault-config.hcl" highlight="17-25">
```hcl
ui = true
cluster_addr  = "https://127.0.0.1:8201"
api_addr      = "https://127.0.0.1:8200"
disable_mlock = true

storage "raft" {
  path    = "/path/to/raft/data"
  node_id = "raft_node_1"
}

listener "tcp" {
  address       = "127.0.0.1:8200"
  tls_cert_file = "/path/to/full-chain.pem"
  tls_key_file  = "/path/to/private-key.pem"
}

listener "tcp" {
  address          = "127.0.0.1:8300"
  chroot_namespace = "usa-hq"
  tls_cert_file    = "/path/to/full-chain.pem"
  tls_key_file     = "/path/to/private-key.pem"

  telemetry {
    unauthenticated_metrics_access = true
  }
}

telemetry {
  statsite_address = "127.0.0.1:8125"
  disable_hostname = true
}
```
</CodeBlockConfig>
The `chroot_namespace` specifies an alternate top-level namespace for the
listener, `https://127.0.0.1:8300`.
**Example request:**
```shell-session
$ curl --header "X-Vault-Namespace: team_1" \
--header "X-Vault-Token: $VAULT_TOKEN" \
--request POST \
--data '{"type": "kv-v2"}' \
https://127.0.0.1:8300/v1/sys/mounts/team-secret
```
The request operates on the `usa-hq/team_1` namespace since the top-level
namespace is set to `usa-hq` for the listener address, `127.0.0.1:8300`.
The top-level namespace for `https://127.0.0.1:8200` is `root`.
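The CLI equivalent is a sketch along these lines; pointing `VAULT_ADDR` at the
chroot listener makes the `usa-hq/` prefix implicit:
```shell-session
$ VAULT_ADDR=https://127.0.0.1:8300 VAULT_NAMESPACE=team_1 \
    vault secrets enable -path=team-secret kv-v2
```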
## General guidance
The following principles should be used to guide an appropriate namespace or
mount path structure.
- [Use namespaces sparingly](#use-namespaces-sparingly)
- [Leverage Vault identities](#leverage-vault-identities)
- [Understand Vault's mount points](#understand-vault-s-mount-points)
- [Granularity of paths](#granularity-of-paths)
- [Standardized onboarding process](#standardized-onboarding-process)
### Use namespaces sparingly
The primary purpose of namespaces is to delineate administrative boundaries.
The main determining factor for **encapsulating an organizational unit** into
its own namespace is the need for that unit to be able to directly manage
policies. However, many organizations may find their deployment requirements are
more nuanced, especially if they want to enable "self-service" for their
consumers of Vault.
When setting up Vault to be self-service, you should first ask what
"self-service" actually means to your organization.
- Will teams be managing Vault directly?
- Will there be an onboarding process/layer that teams interact with?
When possible, HashiCorp recommends providing the self-service capability by
implementing an onboarding layer rather than directly through Vault. The
onboarding layer can enforce a standard naming convention, secrets path
structure, and templated policies. In this case, the administrative boundary
is at the onboarding layer and not at the organizational unit level. As such,
this use case should not require a separate namespace for the team.

However, these teams may roll up to a specific platform team or LOB for which
the policy structuring, authentication methods, and secrets use cases are common
across all teams within that LOB. Here, it makes sense that the higher-level
organizational unit has its own namespace.
Additionally, in many cases, most of the desired level of isolation can be
enforced via ACL policies.
The entire list of namespaces must fit into a single storage entry in Vault, and
each namespace creates at least two secrets engines which also require storage
space. Namespace planning should include a review of the maximum number of
namespaces allowed by the storage entry size.
### Leverage Vault identities
It's also critical to understand identity in order to make use of [Vault ACL
templates](/vault/docs/concepts/policies#templated-policies),
which can ease policy management.
Vault provides an internal layer of identity management that can be used to map
entities to multiple auth methods as well as provide grouping capabilities. This
allows for more robust policy assignment options.
<Tip>
Visit the [identity alias name
table](/vault/docs/secrets/identity#mount-bound-aliases) documentation page to
learn about constructing templated ACL policies.
</Tip>
The entity aliases, based on specific information available from the auth
method, map to identity entities that you create. You can use the default names
and associated metadata that are created for aliases and entities as part of
policy templates and deciding on naming conventions for secrets paths/roles.
This allows you to avoid having hard-coded policies for use cases that follow a
certain pattern broadly.
You can define identity groups to associate entities that should have
permissions in common, and reference those groups in policy templates just as
you can reference entities and aliases. These groups may also be created automatically for
you, depending on the auth methods used.
**ACL policy template example:**
<CodeBlockConfig lineNumbers>
```hcl
path "kvv1-/*" {
capabilities = [ "create", "read", "update", "delete", "list" ]
}
path "transit/encrypt/" {
capabilities = [ "update" ]
}
```
</CodeBlockConfig>
Those templated values get resolved dynamically based on the requester's entity
token metadata.
At line 1, the `{{identity.entity.metadata.team_name}}` value retrieves the
`team_name` value set on the entity's metadata. Similarly, the
`{{identity.entity.aliases.auth_approle_00000000.metadata.role_id}}` value at
line 5 returns the Role ID of the requesting client, where
`auth_approle_00000000` stands in for the mount accessor of your auth method.
This enables your policies to be less
static.
<Note>
The number of identity entities is how Vault determines the number
of active clients for reporting and licensing purposes. Refer to the [Client
Count](/vault/docs/concepts/client-count) documentation for
more detail.
</Note>
<Tip>
If you are not familiar with templated policies, read the [ACL Policy Path
Templating](/vault/tutorials/policies/policy-templating) tutorial.
</Tip>
### Understand Vault's mount points
Auth methods and secrets engines can be categorized into two types:
1. **Dedicated:** Auth methods and secrets engines that can be managed and
mapped directly to a specific organizational unit. For example, the team that
manages `app-1` can utilize their own AppRole and/or KV mounts without the
ability to impact other teams' mounts.
1. **Shared:** Organization-level resources, such as the Kubernetes auth method
and the Active Directory (AD) secrets engine, that are **shared** and managed at
the company level; therefore, they are mounted at the company-level namespace.
It's important to understand [Vault's sizing
restrictions](/vault/docs/internals/limits) for mounts. All
secrets engine and auth method mount points must each fit within a single storage
entry. For Consul, the storage limit is 512 KB. For Integrated Storage, the
limit is 1 MB.
Each JSON object describing a mount uses ~500 bytes, but in compressed form
it's ~75 bytes. Since auth mounts, secrets engine mount points, local-only auth
methods, and local-only secrets engine mounts are stored separately, the limit
applies to each independently.
By default, each namespace is created with a token auth mount (`/auth/`), an
identity mount (`/identity/`), and a system mount (`/sys/`). This means that
each namespace requires three mounts before you add any custom mounts.
Multiplied across thousands of namespaces, the mount tables grow quickly.
### Granularity of paths
When thinking about Vault's logical structure, you want to find the right
balance of granularity between the various mounts needed and the roles defined
within the mounts. Sharing mounts between teams has benefits and risks. Below
are a couple of use cases with their benefits and risks.
You create a single KV mount with a sub-path for every team within the same
mount.
- **Benefit:** reduces the potential of hitting mount table limits.
- **Risk:** accidental deletion of the KV mount impacts all users of that
  secrets engine.
You create a unique mount per LOB.
- **Benefit:** can provide sub-paths for different teams and limit the
blast-radius of an errant change to a single mount.
- **Risk:** unique KV mounts per team becomes inefficient from a mount
management perspective.

### Standardized onboarding process
When deploying Vault at scale, it is critical to Vault adoption to consider the
consumer experience. Specifically, it's important to reduce the level of
friction of consuming Vault. While it may be quick to drop Vault into an
environment and interact with it directly, it's important to deliberately map
out how consumers will onboard to Vault and consume the service.
One of the pillars behind the Tao of HashiCorp is automation through
codification. Many HashiCorp users are using Terraform for managing
infrastructure on-prem and in the cloud. Terraform can also be used to [codify
Vault](/vault/tutorials/operations/codify-mgmt-enterprise)
configuration tasks such as creation of namespaces, policies, and mounts. This
allows Vault operators to increase their productivity, move quicker, promote
repeatable processes, and reduce human error.
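For example, a minimal Terraform sketch could look like the following. The
namespace and policy names are illustrative, and it assumes a Terraform Vault
provider version that supports the per-resource `namespace` argument:
```hcl
resource "vault_namespace" "education" {
  path = "education"
}

resource "vault_policy" "team_read_only" {
  # Create the policy inside the namespace defined above
  namespace = vault_namespace.education.path
  name      = "team-read-only"

  # Hypothetical KV v2 mount; adjust the path to your own layout
  policy = <<-EOT
    path "kv-team/data/*" {
      capabilities = ["read"]
    }
  EOT
}
```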
## Tutorials
To learn more, review the following tutorials:
- [Secure multi-tenancy with namespaces](/vault/tutorials/enterprise/namespaces)
- [Vault recommended patterns](/vault/tutorials/recommended-patterns)
- [Vault standard operating procedures](/vault/tutorials/standard-procedures) | vault | layout docs page title Namespace and mount structure guide description Explains HashiCorp s recommended approach to structuring the Vault namespaces and how namespaces impact on the endpoint paths Namespace and mount structure guide Namespaces are isolated environments that functionally create Vaults within a Vault They have separate login paths and support creating and managing data isolated to their namespace This functionality enables you to provide Vault as a service to tenants Conceptual diagram for namespace usages img diagram namespaces for org png This guide provides recommended approach to structuring Vault namespaces and mount paths as well as some guidance around how to make decisions for namespaces and paths structuring given the organizational structure and use cases Why is this topic important Everything in Vault is path based Each path corresponds to an operation or secret in Vault and the Vault API endpoints map to these paths therefore writing policies configures the permitted operations to specific secret paths For example to grant access to manage tokens in the root namespace the policy path is auth token To manage tokens for the education namespace the fully qualified path functionally becomes education auth token The following diagram demonstrates the API paths based on where the auth method and secrets engines are enabled Namespaces and mount paths img diagram namespaces paths png You can isolate secrets using namespaces or mounts dedicated to each Vault client For example you can create a namespace for each isolated tenant and they are responsible for managing the resources under their namespace Alternatively you can mount a dedicated secrets engine at a path dedicated to each team within the organization Namespaces best practices img diagram namespaces intro png Depending on how you isolate the secrets it determines who is responsible for managing those secrets and more importantly policies related to those secrets Note The creation of namespaces should be performed by a user with a highly privileged token such as root to set up isolated environments for each organization team or application Note Deployment considerations To plan and design the Vault namespaces auth method paths and secrets engine paths you need to consider how to best structure Vault s logical objects for your organization table thead tr th Requirements th th What to consider th tr thead tbody tr td Organizational structure td td ul li What is your organizational structure li li What is the level of granularity across lines of businesses LOBs divisions teams services apps that needs to be reflected in Vault s end state design li ul td tr tr td Self service requirements td td ul li Given your organizational structure what is the desired level of strong self service strong required li li How are Vault policies to be managed li li Will teams need to directly manage policies for their own scope of responsibility li li Or will they be interacting with Vault via some abstraction layer where policies and patterns will be templatized For example configuration by code Git flows the Terraform Vault provider custom onboarding layers or some combination of these li ul td tr tr td Audit requirements td td ul li What are the requirements around auditing usage of Vault within your organization li li Is there a need to regularly certify access to secrets li li Is there a need to review and or decommission stale secrets or auth 
roles li li Is there a need to determine chargeback amounts to internal customers li ul td tr tr td Secrets engine requirements td td What types of secrets engines will you use KV database AD PKI etc br br For large organizations each of these might require different structuring patterns For example with KV secrets engine each team might have their own dedicated KV mount However for AD secrets engine this is inherently a em shared em type of mount so you would manage access at a role level rather than having multiple mounts that share the same connection configuration td tr tbody table Chroot namespace Note title Vault version To use the chroot listener feature you must run Vault Enterprise 1 15 or later Note Vault clients users applications etc must be aware of which namespace to send requests and set the target namespace using namespace flag X Vault Namespace HTTP header or VAULT NAMESPACE environment variable If the target namespace is not properly set the request will fail This can be cumbersome To simplify Vault operators can specify additional listener stanza in the configuration file and defines chroot namespace to specify an alternate top level namespace Example CodeBlockConfig filename vault config hcl highlight 17 25 hcl ui true cluster addr https 127 0 0 1 8201 api addr https 127 0 0 1 8200 disable mlock true storage raft path path to raft data node id raft node 1 listener tcp address 127 0 0 1 8200 tls cert file path to full chain pem tls key file path to private key pem listener tcp address 127 0 0 1 8300 chroot namespace usa hq tls cert file path to full chain pem tls key file path to private key pem telemetry unauthenticated metrics access true telemetry statsite address 127 0 0 1 8125 disable hostname true CodeBlockConfig The chroot namespace specifies an alternate top level namespace for the listener https 127 0 0 1 8300 Example request shell session curl header X Vault Namespace team 1 header X Vault Token VAULT TOKEN request POST data type kv v2 https 127 0 0 1 8300 v1 sys mounts team secret The request operates on the usa hq team 1 namespace since the top level namespace is set to usa hq for the listener address 127 0 0 1 8300 The top level namespace for https 127 0 0 1 8200 is root General guidance The following principles should be used to guide an appropriate namespace or mount path structure Use namespaces sparingly use namespaces sparingly Leverage Vault identities leverage vault identities Understand Vault s mount points understand vault s mount points Granularity of paths granularity of paths Standardized onboarding process standardized onboarding process Use namespaces sparingly The primary purpose of namespaces is to delineate administrative boundaries The main determining factor for encapsulating an organizational unit into its own namespace is the need for that unit to be able to directly manage policies However many organizations may find their deployment requirements are more nuanced especially if they want to enable self service for their consumers of Vault When setting up Vault to be self service you should first ask what does self service actually mean to your organization Will teams be managing Vault directly Will there be an onboarding process layer that teams interact with When possible HashiCorp recommends providing the self service capability by implementing an onboarding layer rather than directly through Vault The onboarding layer can enforce a standard naming convention secrets path structure and templated policies In this case the administrative 
boundary is at the onboarding layer and not at the organizational unit level As such this use case should not require a separate namespace for the team Namespaces best practices img diagram namespaces bp png However these teams may roll up to a specific platform team or LOB for which the policy structuring authentication methods and secrets use cases are common across all teams within that LOB Here it makes sense that the higher level organizational unit has its own namespace Additionally in many cases most of the desired level of isolation can be enforced via ACL policies The entire list of namespaces must fit into a single storage entry in Vault and each namespace creates at least two secrets engines which also require storage space Namespace planning should include a review of the maximum number of namespaces allowed by the storage entry size Leverage Vault identities It s also critical to understand identity in order to make use of Vault ACL templates vault docs concepts policies templated policies which can ease policy management Vault provides an internal layer of identity management that can be used to map entities to multiple auth methods as well as provide grouping capabilities This allows for more robust policy assignment options Tip Visit the identity alias name table vault docs secrets identity mount bound aliases documentation page to learn about constructing templated ACL policies Tip The entity aliases based on specific information available from the auth method maps to identity entities that you create You can use the default names and associated metadata that are created for aliases and entities as part of policy templates and deciding on naming conventions for secrets paths roles This allows you to avoid having hard coded policies for use cases that follow a certain pattern broadly You can define identity groups to associate entities that should have permissions in common and reference those groups in policy templates as much as you can entities and aliases These groups may also be created automatically for you depending on the auth methods used ACL policy template example CodeBlockConfig lineNumbers hcl path kvv1 capabilities create read update delete list path transit encrypt capabilities update CodeBlockConfig Those templated values get resolved dynamically based on the requester s entity token metadata At line 1 the value retrieves the team name value set on the entity s metadata Similarly the value at line 5 returns the Role ID of the requesting client This enables your policies to be less static Note The number of identity entities is how Vault determines the number of active clients for reporting and licensing purposes Refer to the Client Count vault docs concepts client count documentation for more detail Note Tip If you are not familiar with templated policies read the ACL Policy Path Templating vault tutorials policies policy templating tutorial Tip Understand Vault s mount points Auth methods and secrets engines can be categorized into two types 1 Dedicated Auth methods and secrets engines that can be managed and mapped directly to a specific organizational unit For example the team that manages app 1 can utilize their own AppRole and or KV mounts without the ability to impact other teams mounts 1 Shared An organization level resources such as Kubernetes auth method and the Active Directory AD secrets engine that are shared and managed at the company level therefore mounted at the company level namespace It s important to understand Vault s sizing restrictions 
vault docs internals limits for mounts All secrets engine and auth method mount points must each fit within a single storage entry For Consul the storage limit is 512KB For Integrated Storage the limit is 1MB Each JSON object describing a mount uses 500 bytes but in compressed form it s 75 bytes Since auth mounts secrets engine mount points local only auth methods and local only secrets engine mounts are stored separately the limit applies to each independently By default each namespace is created with a token auth mount auth an identity mount identity and system mount sys This means that each namespace requires three different mounts and then you will add your custom mounts Multiply that by 1 000s means that your mount tables will grow exponentially Granularity of paths When thinking about Vault s logical structure you want to find the right balance of granularity between the various mounts needed and the roles defined within the mounts Sharing mounts between teams has benefits and risk It is up to you to find the right balance of granularity between the various mounts needed and the roles defined within the mounts Below are a couple use cases with their benefits and risks You create a single KV mount with a sub path for every team within the same mount Benefit reduces potential of hitting mount table limits Risk the KV mount is accidentally deleted causing all users of that secret engine to be impacted You create a unique mount per LOB Benefit can provide sub paths for different teams and limit the blast radius of an errant change to a single mount Risk unique KV mounts per team becomes inefficient from a mount management perspective Compare a single mount vs multiple mounts img diagram namespaces kv paths png Standardized onboarding process When deploying Vault at scale it is critical to Vault adoption to consider the consumer experience Specifically it s important to reduce the level of friction of consuming Vault While it maybe quick to drop Vault into an environment and interact with it directly it s important to deliberately map out how consumers will onboard to Vault and consume the service One of the pillars behind the Tao of Hashicorp is automation through codification Many HashiCorp users are using Terraform for managing infrastructure on prem and in the cloud Terraform can also be used to codify Vault vault tutorials operations codify mgmt enterprise configuration tasks such as creation of namespaces policies and mounts This allows Vault operators to increase their productivity move quicker promote repeatable processes and reduce human error Tutorials To learn more review the following tutorials Secure multi tenancy with namespaces vault tutorials enterprise namespaces Vault recommended patterns vault tutorials recommended patterns Vault standard operating procedures vault tutorials standard procedures |
---
layout: docs
page_title: Configure an administrative namespace
description: >-
Step-by-step guide for setting up an administrative namespace with Vault
Enterprise
---
# Create an administrative namespace <EnterpriseAlert product="vault" inline="true" />
Grant access to a predefined subset of privileged system backend endpoints in
the Vault API with an administrative namespace.
<Tip title="HCP Vault Dedicated has a built-in administrative namespace">
HCP Vault Dedicated clusters include an administrative namespace (`admin`) by default.
For more information on managing namespaces with HCP Vault Dedicated, refer to the
[HCP Vault Dedicated namespace considerations](/vault/tutorials/cloud-ops/hcp-vault-namespace-considerations)
guide.
</Tip>
## Before you start
- **You must have Vault Enterprise 1.15+ installed and running**.
- **You must have access to your Vault configuration file**.
- **You must have permission to create and manage namespaces for your Vault instance**.
## Step 1: Create your namespace
Use the `namespace create` CLI command to create a new namespace:
```shell-session
$ vault namespace create YOUR_NAMESPACE_NAME
```
For example, to create a namespace called "ns_admin" under the root namespace:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault namespace create ns_admin
```
</CodeBlockConfig>
## Step 2: Give the namespace admin permission
To create an administrative namespace, set the `administrative_namespace_path`
parameter in your Vault configuration with the absolute path of your new
namespace. We recommend setting the namespace path with the other string
assignments in your configuration file. For example:
<CodeBlockConfig highlight="3">
```hcl
ui = true
api_addr = "https://127.0.0.1:8200"
administrative_namespace_path = "ns_admin/"
```
</CodeBlockConfig>
## Step 3: Verify the new permissions
To verify permissions for the administrative namespace, compare API responses
from a restricted endpoint from your new namespace and another namespace without
elevated permissions.
1. If you do not already have a namespace you can use for testing, create a test
namespace called "ns_test" with the `namespace create` CLI command:
```shell-session
$ vault namespace create ns_test
```
1. Use the `monitor` CLI command to call the `/sys/monitor` endpoint from your
test namespace:
```shell-session
$ env VAULT_NAMESPACE="ns_test" vault monitor –log-level=debug
```
You should see an unsupported path error:
<CodeBlockConfig hideClipboard>
```shell-session
$ env VAULT_NAMESPACE="ns_test" vault monitor –log-level=debug
Error starting monitor: Error making API request.
Namespace: ns_test/
URL: GET http://127.0.0.1:8400/v1/sys/monitor?log_format=standard&log_level=debug
Code: 404. Errors:
* 1 error occurred:
* unsupported path
```
</CodeBlockConfig>
1. Now use the `monitor` command to call the `sys/monitor` endpoint from your
administrative namespace:
```shell-session
$ env VAULT_NAMESPACE="ns_admin" vault monitor –log-level=debug
```
You should see log data from your Vault instance streaming to the terminal:
<CodeBlockConfig hideClipboard>
```shell-session
$ env VAULT_NAMESPACE="ns_admin" vault monitor –log-level=debug
2023-08-31T11:54:41.846+0200 [DEBUG] replication.index.perf: saved checkpoint: num_dirty=0
2023-08-31T11:54:41.961+0200 [DEBUG] replication.index.local: saved checkpoint: num_dirty=0
```
</CodeBlockConfig>
## Next steps
- Follow the [Secure multi-tenancy with namespaces](/vault/tutorials/enterprise/namespaces)
tutorial to provide additional security and ensure teams can self-manage their
own environments.
- Read more about [managing namespaces in Vault Enterprise](/vault/docs/enterprise/namespaces) | vault | layout docs page title Configure an administrative namespace description Step by step guide for setting up an administrative namespace with Vault Enterprise Create an administrative namespace EnterpriseAlert product vault inline true Grant access to a predefined subset of privileged system backend endpoints in the Vault API with an administrative namespace Tip title HCP Vault Dedicated has a built in administrative namespace HCP Vault Dedicated clusters include an administrative namespace admin by default For more information on managing namespaces with HCP Vault Dedicated refer to the HCP Vault Dedicated namespace considerations vault tutorials cloud ops hcp vault namespace considerations guide Tip Before you start You must have Vault Enterprise 1 15 installed and running You must have access to your Vault configuration file You must have permission to create and manage namespaces for your Vault instance Step 1 Create your namespace Use the namespace create CLI command to create a new namespace shell session vault namespace create YOUR NAMESPACE NAME For example to create a namespace called ns admin under the root namespace CodeBlockConfig hideClipboard shell session vault namespace create ns admin CodeBlockConfig Step 2 Give the namespace admin permission To create an administrative namespace set the administrative namespace path parameter in your Vault configuration with the absolute path of your new namespace We recommend setting the namespace path with the other string assignments in your configuration file For example CodeBlockConfig highlight 3 hcl ui true api addr https 127 0 0 1 8200 administrative namespace path ns admin CodeBlockConfig Step 3 Verify the new permissions To verify permissions for the administrative namespace compare API responses from a restricted endpoint from your new namespace and another namespace without elevated permissions 1 If you do not already have a namespace you can use for testing create a test namespace called ns test with the namespace create CLI command shell session vault namespace create ns test 1 Use the monitor CLI command to call the sys monitor endpoint from your test namespace shell session env VAULT NAMESPACE ns test vault monitor log level debug You should see an unsupported path error CodeBlockConfig hideClipboard shell session env VAULT NAMESPACE ns test vault monitor log level debug Error starting monitor Error making API request Namespace ns test URL GET http 127 0 0 1 8400 v1 sys monitor log format standard log level debug Code 404 Errors 1 error occurred unsupported path CodeBlockConfig 1 Now use the monitor command to call the sys monitor endpoint from your administrative namespace shell session env VAULT NAMESPACE ns admin vault monitor log level debug You should see log data from your Vault instance streaming to the terminal CodeBlockConfig hideClipboard shell session env VAULT NAMESPACE ns admin vault monitor log level debug 2023 08 31T11 54 41 846 0200 DEBUG replication index perf saved checkpoint num dirty 0 2023 08 31T11 54 41 961 0200 DEBUG replication index local saved checkpoint num dirty 0 CodeBlockConfig Next steps Follow the Secure multi tenancy with namespaces vault tutorials enterprise namespaces tutorial to provide additional security and ensure teams can self manage their own environments Read more about managing namespaces in Vault Enterprise vault docs enterprise namespaces |
---
layout: docs
page_title: Configure cross namespace access without hierarchical relationships
description: >-
Set up cross namespace access without hierarchical relationships for Vault Enterprise.
---
# Configure cross namespace access
Using the `sys/config/group_policy_application` endpoint, you can enable secrets sharing
across multiple independent namespaces.
Historically, any policies attached to an [identity group](/vault/docs/concepts/identity#identity-groups) would only apply when the
Vault token authorizing a request was created in the same namespace as that
group, or a descendant namespace.
This endpoint reduces the operational overhead by relaxing this restriction.
When the mode is set to the default, `within_namespace_hierarchy`, the
historical behavior is maintained. When set to `any`, group policies apply to
all members of a group, regardless of what namespace the request token came
from.
## Prerequisites
- Vault Enterprise 1.13 or later
- Authentication method configured
## Enable secrets sharing
1. Verify the current setting.
```shell-session
$ vault read sys/config/group-policy-application
Key Value
--- -----
group_policy_application_mode within_namespace_hierarchy
```
`within_namespace_hierarchy` is the default setting.
1. Change the `group_policy_application_mode` setting to `any`.
```shell-session
$ vault write sys/config/group-policy-application \
group_policy_application_mode="any"
```
<CodeBlockConfig hideClipboard>
```plaintext
Success! Data written to: sys/config/group-policy-application
```
</CodeBlockConfig>
Policies can now be applied, and secrets shared, across namespaces without a
hierarchical relationship.
## Example auth method configuration
Cross namespace access can be used with all auth methods for both machine and
human based authentication. Examples of each are provided for reference.
<Tabs>
<Tab heading="Kubernetes" group="kubernetes">
1. Create and run a script to configure the Kubernetes auth method, and two
namespaces.
```shell
# Create new namespaces - they are peers
vault namespace create us-west-org
vault namespace create us-east-org
#--------------------------
# us-west-org namespace
#--------------------------
VAULT_NAMESPACE=us-west-org vault auth enable kubernetes
VAULT_NAMESPACE=us-west-org vault write auth/kubernetes/config out_of=scope
VAULT_NAMESPACE=us-west-org vault write auth/kubernetes/role/cross-namespace-demo bound_service_account_names="mega-app" bound_service_account_namespaces="client-nicecorp" alias_name_source="serviceaccount_name"
# Create an entity
VAULT_NAMESPACE=us-west-org vault auth list -format=json | jq -r '.["kubernetes/"].accessor' > accessor.txt
VAULT_NAMESPACE=us-west-org vault write -format=json identity/entity name="entity-for-mega-app" | jq -r ".data.id" > entity_id.txt
VAULT_NAMESPACE=us-west-org vault write identity/entity-alias name="client-nicecorp/mega-app" canonical_id=$(cat entity_id.txt) mount_accessor=$(cat accessor.txt)
#--------------------------
# us-east-org namespace
#--------------------------
VAULT_NAMESPACE=us-east-org vault secrets enable -path="kv-marketing" kv-v2
VAULT_NAMESPACE=us-east-org vault kv put kv-marketing/campaign start_date="March 1, 2023" end_date="March 31, 2023" prize="Certification voucher" quantity="100"
# Create a policy to allow read access to kv-marketing
VAULT_NAMESPACE=us-east-org vault policy write marketing-read-only -<<EOF
path "kv-marketing/data/campaign" {
capabilities = ["read"]
}
EOF
# Create a group
VAULT_NAMESPACE=us-east-org vault write -format=json identity/group name="campaign-admin" policies="marketing-read-only" member_entity_ids=$(cat entity_id.txt)
```
1. Authenticate to the `us-west-org` Vault namespace with a valid JWT.
```shell-session
$ VAULT_NAMESPACE=us-west-org vault write -format=json auth/kubernetes/login role=cross-namespace-demo jwt=$(cat jwt.txt) | jq -r .auth.client_token > token.txt
```
1. Read a secret in the `us-east-org` namespace using the Vault token from
`us-west-org`.
```shell-session
$ VAULT_NAMESPACE=us-east-org VAULT_TOKEN=$(cat token.txt) vault kv get kv-marketing/campaign
```
</Tab>
<Tab heading="Userpass" group="userpass">
1. Create and run a script to configure the userpass auth method, and two
Vault namespaces.
```shell
# Create new namespaces - they are peers
vault namespace create us-west-org
vault namespace create us-east-org
#--------------------------
# us-west-org namespace
#--------------------------
VAULT_NAMESPACE=us-west-org vault secrets enable -path="kv-customer-info" kv-v2
VAULT_NAMESPACE=us-west-org vault kv put kv-customer-info/customer-001 name="Example LLC" contact_email="[email protected]"
# Create a policy to allow read access to kv-customer-info
VAULT_NAMESPACE=us-west-org vault policy write customer-info-read-only -<<EOF
path "kv-customer-info/data/*" {
capabilities = ["read"]
}
EOF
VAULT_NAMESPACE=us-west-org vault auth enable userpass
VAULT_NAMESPACE=us-west-org vault write auth/userpass/users/tam-user password="my-long-password" policies=customer-info-read-only
# Create an entity
VAULT_NAMESPACE=us-west-org vault auth list -format=json | jq -r '.["userpass/"].accessor' > accessor.txt
VAULT_NAMESPACE=us-west-org vault write -format=json identity/entity name="TAM" | jq -r ".data.id" > entity_id.txt
VAULT_NAMESPACE=us-west-org vault write identity/entity-alias name="tam-user" canonical_id=$(cat entity_id.txt) mount_accessor=$(cat accessor.txt)
#--------------------------
# us-east-org namespace
#--------------------------
VAULT_NAMESPACE=us-east-org vault secrets enable -path="kv-marketing" kv-v2
VAULT_NAMESPACE=us-east-org vault kv put kv-marketing/campaign start_date="March 1, 2023" end_date="March 31, 2023" prize="Certification voucher" quantity="100"
# Create a policy to allow read access to kv-marketing
VAULT_NAMESPACE=us-east-org vault policy write marketing-read-only -<<EOF
path "kv-marketing/data/campaign" {
capabilities = ["read"]
}
EOF
# Create a group
VAULT_NAMESPACE=us-east-org vault write -format=json identity/group name="campaign-admin" policies="marketing-read-only" member_entity_ids=$(cat entity_id.txt)
```
1. Authenticate to the `us-west-org` Vault namespace with a valid user.
```shell-session
$ VAULT_NAMESPACE=us-west-org vault login -field=token -method=userpass \
username=tam-user password="my-long-password" > token.txt
```
1. Read a secret in the `us-east-org` namespace using the Vault token from
`us-west-org`.
```shell-session
$ VAULT_NAMESPACE=us-east-org VAULT_TOKEN=$(cat token.txt) \
vault kv get kv-marketing/campaign
```
</Tab>
</Tabs>
## API
- [/sys/config/group-policy-application](/vault/api-docs/system/config-group-policy-application)
## Tutorial
- [Secrets management across namespaces without hierarchical relationship](/vault/tutorials/enterprise/namespaces-secrets-sharing)
---
layout: docs
page_title: Exclusion syntax for audit results
description: >-
Learn about the behavior and syntax for excluding audit data in Vault Enterprise.
---
# Exclusion syntax for audit results
@include 'alerts/enterprise-only.mdx'
As of Vault 1.18.0, you can enable audit devices with an `exclude` option to exclude
specific fields in an audit entry that is written to a particular audit log, and fine-tune
your auditing process.
<Warning title="Proceed with caution">
Excluding audit entry fields is an advanced feature. Use of exclusion settings
could lead to missing data in your audit logs.
**Always** test your audit configuration in a non-production environment
before deploying exclusions to production. Read the
[Vault security model](/vault/docs/internals/security) and
[filtering overview](/vault/docs/concepts/filtering) to familiarize yourself
with Vault auditing and filtering basics before enabling audit devices that use
exclusions.
</Warning>
Once you enable an audit device with exclusions, every audit entry Vault sends to
that audit device is compared to an (optional) condition in the form of a predicate expression.
Vault checks exclusions before writing to the audit log for a device. Vault modifies
any audit entries that match the exclusion expression to remove the fields
specified for that condition. You can specify multiple sets of condition and field
combinations for an individual audit device.
When you enable audit devices that use exclusion, the behavior of any existing audit
device and the behavior of new audit devices that **do not** use exclusion remain
unchanged.
## `exclude` option
The value provided with the `exclude` option must be a parsable JSON array (i.e. JSON or
an escaped JSON string) of exclusion objects.
### Exclusion object
- `condition` `(string: <optional>)` - predicate expression using
[filtering syntax](/vault/docs/concepts/filtering). When matched, Vault removes
the values identified by `fields`.
- `fields` `(string[]: <required>)` - collection of fields in the audit entry to exclude,
identified using [JSON pointer](https://tools.ietf.org/html/rfc6901) syntax.
```json
[
{
"condition": "",
"fields": [ "" ]
}
]
```
Vault always compares exclusion conditions against the original, immutable audit
entry (the 'golden source'). As a result, evaluating a given condition does not
affect the evaluation of subsequent conditions.
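For example, in the following hypothetical combination, the second exclusion still matches entries with a `transit` mount type even though the first exclusion removes the `/request/mount_type` field, because both conditions are evaluated against the original entry:
```json
[
  {
    "fields": [ "/request/mount_type" ]
  },
  {
    "condition": "\"/request/mount_type\" == transit",
    "fields": [ "/response/data" ]
  }
]
```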
### Exclusion examples
#### Exclude response data (when present)
Exclude the response `data` field from any audit entry that contains it:
```json
[
{
"fields": [ "/response/data" ]
}
]
```
#### Exclude request data (when present) for transit mounts
Exclude the request `data` field for audit entries with a mount type of `transit`:
```json
[
{
"condition": "\"/request/mount_type\" == transit",
"fields": [ "/request/data" ]
}
]
```
#### Multiple exclusions
Use multiple JSON objects to exclude:
* `data` from both the request and response when the mount type is `transit`.
* `entity_id` from requests where the `/auth/client_token` starts with `hmac`
followed by at least one other character.
```json
[
{
"condition": "\"/request/mount_type\" == transit",
"fields": [ "/request/data", "/response/data" ]
},
{
"condition": "\"/auth/client_token\" matches \"hmac.+\"",
"fields": [ "/auth/entity_id" ]
}
]
```
## Audit entry structure
To accurately construct `condition` and `fields`, Vault operators need a solid
understanding of their audit entry structures. At a high level, there are only
**request** audit entries and **response** audit entries, but each of these
entries can contain different objects such as `auth`, `request` and `response`.
We strongly encourage operators to review existing audit logs from a timeframe
of at least 2-4 weeks to better identify appropriate exclusion conditions and
fields.
### Request audit entry
```json
{
"auth": <auth>,
"error": "",
"forwarded_from": "",
"request": <request>,
"time": "",
"type": ""
}
```
### Response audit entry
```json
{
"auth": <auth>,
"error": "",
"forwarded_from": "",
"request": <request>,
"response": <response>,
"time": "",
"type": ""
}
```
### Auth object (`<auth>`)
The following auth object definition includes example data with simple types
(`string`, `bool`, `int`) and is used in other JSON examples that include an
`<auth>` object.
```json
{
"accessor": "",
"client_token": "",
"display_name": "",
"entity_created": "",
"entity_id": "",
"external_namespace_policies": {
"allowed": true,
"granting_policies": [
{
"name": "",
"namespace_id": "",
"namespace_path": "",
"type": ""
}
]
},
"identity_policies": [
""
],
"metadata": {},
"no_default_policy": false,
"num_uses": 10,
"policies": [
""
],
"policy_results": {
"allowed": true,
"granting_policies": [
{
"name": "",
"namespace_id": "",
"namespace_path": "",
"type": ""
}
]
},
"remaining_uses": 5,
"token_policies": [
""
],
"token_issue_time": "",
"token_ttl": 3600,
"token_type": ""
}
```
### Request object (`<request>`)
The following request object definition includes example data with simple types
(`string`, `bool`, `int`) and is used in other JSON examples that include a
`<request>` object.
```json
{
"client_certificate_serial_number": "",
"client_id": "",
"client_token": "",
"client_token_accessor": "",
"data": {},
"id": "",
"headers": {},
"mount_accessor": "",
"mount_class": "",
"mount_point": "",
"mount_type": "",
"mount_running_version": "",
"mount_running_sha256": "",
"mount_is_external_plugin": "",
"namespace": {
"id": "",
"path": ""
},
"operation": "",
"path": "",
"policy_override": true,
"remote_address": "",
"remote_port": 1234,
"replication_cluster": "",
"request_uri": "",
"wrap_ttl": 60
}
```
### Response object (`<response>`)
The following response object definition includes example data with simple types
(`string`, `bool`, `int`) and is used in other JSON examples that include a
`<response>` object.
```json
{
"auth": <auth>,
"data": {},
"headers": {},
"mount_accessor": "",
"mount_class": "",
"mount_is_external_plugin": false,
"mount_point": "",
"mount_running_sha256": "",
"mount_running_plugin_version": "",
"mount_type": "",
"redirect": "",
"secret": {
"lease_id": ""
},
"wrap_info": {
"accessor": "",
"creation_path": "",
"creation_time": "",
"token": "",
"ttl": 60,
"wrapped_accessor": ""
},
"warnings": [
""
]
}
```
## Request audit entry schema
@include 'audit/request-entry-json-schema.mdx'
## Response audit entry schema
@include 'audit/response-entry-json-schema.mdx'
---
layout: docs
page_title: Filter syntax for audit results
description: >-
Learn about the behavior and syntax for filtering audit data in Vault Enterprise.
---
# Filter syntax for audit results
@include 'alerts/enterprise-only.mdx'
As of Vault 1.16.0, you can enable audit devices with a `filter` option to limit
the audit entries written to a particular audit log and fine-tune your auditing
process.
<Warning title="Proceed with caution">
Filtering audit logs is an advanced feature. Exclusively enabling filtered
devices without configuring an audit fallback may lead to gaps in your audit
logs.
**Always** test your audit configuration in a non-production environment
before deploying filters to production. Make sure to read the
[Vault security model](/vault/docs/internals/security) and
[filtering overview](/vault/docs/concepts/filtering) to familiarize yourself
with Vault auditing and filtering basics before enabling filtered audit
devices.
</Warning>
Once you enable an audit device with a filter, every audit entry Vault sends to
that audit device is compared to the predicate expression in the filter. Only
audit entries that match the filter are written to the audit log for the device.
When you enable filtered audit devices, the behavior of any existing audit
device and the behavior of new audit devices that **do not** use filters remain
unchanged.
## Fallback auditing devices
Filtering adds flexibility to your auditing workflows, but filtering also adds
complexity that can lead to entries missing from your logs by mistake. For
example, writing audit entries to one device for `(N < 10)` and another device
for `(N > 10)` would exclude audit entries where `(N == 10)`, which may not be
the intended behavior.
The fallback audit device saves all audit entries that would otherwise get
filtered out and dropped from the audit record. Enabling an audit device with
the `fallback` parameter ensures that Vault continues to adhere to the default
[security model](/vault/docs/internals/security) which mandates that all
requests and responses must be successfully logged before clients receive secret
material.
Vault installations that use filtered audit devices **exclusively** should
always configure a fallback audit device to guarantee a security standard
comparable to Vault installations that only use standard, non-filtered audit
devices.
<Warning title="You can only have 1 fallback device">
Choose your fallback audit device carefully. You can only designate one
fallback audit device for the entire Vault installation
**out of all your active audit devices**.
</Warning>
### Fallback telemetry metrics
When the fallback device successfully writes an audit entry to the audit log,
Vault emits a
[fallback 'success' metric](/vault/docs/internals/telemetry/metrics/audit#vault-audit-fallback-success).
If you enable filtering **without** a fallback device, Vault emits a
[fallback 'miss' metric](/vault/docs/internals/telemetry/metrics/audit#vault-audit-fallback-miss)
anytime an audit entry would have been written to the fallback device, so you can
track how many auditable events you have lost.
## Audit device limitations
1. You cannot add filtering to an existing audit device.
1. You can configure filtering when enabling one of the following supported audit device types:
- [file](/vault/docs/audit/file)
- [socket](/vault/docs/audit/socket)
- [syslog](/vault/docs/audit/syslog)
1. You can only designate one auditing fallback device.
## Filtering and test messages
By default, Vault sends a test message to the audit device when you enable it.
Depending on how you configure your filters, the default test message may fail
the predicate expression and not write to the new device.
You can determine whether the test message should appear in the sink for the
newly enabled audit device based on the following properties, which are common
to all default test messages.
Property | Value
------------- | ----------------
`mount_point` | empty
`mount_type` | empty
`namespace` | empty
`operation` | `update`
`path` | `sys/audit/test`
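For example, a device enabled with a filter that matches the test message's `operation` records the message, while a `mount_type` filter silently drops it because the test message's `mount_type` is empty. The device path and log file below are illustrative:
```shell-session
$ vault audit enable \
    -path=updates-only \
    file \
    filter="operation == \"update\"" \
    file_path=/logs/updates-audit.log
```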
## `filter` properties for audit devices
Filters can only reference the following properties of an audit entry:
Property | Example | Description
------------- | ----------------------------------- | --------------------------
`mount_point` | `mount_point == \"auth/oidc\"` | Log all entries for the `auth/oidc` mount point
`mount_type` | `mount_type == \"kv-v2\"` | Log all entries from `kv-v2` plugins
`namespace` | `namespace != \"admin/\"` | Log all entries **not** in the admin namespace
`operation` | `operation == \"read\"` | Log all read operations
`path` | `path == \"auth/approle/login\"` | Log all activity against the AppRole login path
<Tip title="Root namespaces are unnamed">
Non-root namespace paths **must** end with a trailing slash (`/`) to match correctly.
But the root namespace does not have a path and only matches to an empty
string. To match to the root namespace in your filter use `\"\"`. For example,
`namespace != \"\"` matches any audited request **not** in the root namespace.
</Tip>
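For example, the following device (with an illustrative path and log file) records only audit entries from the `admin/` namespace; note the trailing slash in the filter:
```shell-session
$ vault audit enable \
    -path=admin-ns \
    file \
    filter="namespace == \"admin/\"" \
    file_path=/logs/admin-ns-audit.log
```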
## A practical example
Assume you already have an audit file called `vault-audit.log` but you want to
filter your audit entries and persist all the key/value (`kv`) type events to a
specific audit log file called `kv-audit.log`.
To filter the events:
1. Enable a `file` audit device with a `mount_type` filter:
```shell-session
$ vault audit enable \
    -path kv-only \
    file \
    filter="mount_type == \"kv\"" \
    file_path=/logs/kv-audit.log
```
1. Enable a fallback device:
```shell-session
$ vault audit enable \
    -path=my-fallback \
    -description="fallback device" \
    file \
    fallback=true \
    file_path=/tmp/kv-audit.fallback.log
```
1. Confirm the audit devices are enabled:
```shell-session
$ vault audit list --detailed
```
1. Enable a new `kv` secrets engine called `my-kv`:
```shell-session
$ vault secrets enable -path my-kv kv-v2
```
1. Write secret data to the `kv` engine:
```shell-session
$ vault kv put -mount=my-kv my_secret the_value=always_angry
```
The `/logs/kv-audit.log` file now includes four entries in total:
- the command request that enabled `my-kv`
- the response entry from enabling `my-kv`
- the command request that wrote a secret to `my-kv`
- the response entry from writing the secret to `my-kv`.
The fallback device captured entries for the other commands. And the
original audit file, `vault-audit.log`, continues to capture all audit events.
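As a spot check, you can confirm that every entry in the filtered log carries the expected mount type. This sketch assumes the log path from the example above and uses `jq`:
```shell-session
$ jq -r '.request.mount_type' /logs/kv-audit.log | sort | uniq -c
```
Every line of output should report the `kv` mount type.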
---
layout: docs
page_title: Check for Merkle tree corruption
description: >-
Learn how to check your Vault Enterprise cluster data for corruption in the Merkle trees used for replication.
---
# Check for Merkle tree corruption
@include 'alerts/enterprise-only.mdx'
Vault Enterprise replication uses Merkle trees to track cluster state, rolling that state up into a single Merkle root hash. When data is updated or removed in a cluster, the Merkle tree is also updated. In certain circumstances, detailed later in this document, the Merkle tree can become corrupted.
## Types of corruption
Merkle tree corruption can occur at different points in the tree:
- Composite root corruption
- Subtree root corruption
- Page and subpage corruption
## Diagnose Merkle tree corruption
If you run Vault Enterprise versions 1.15.0+, 1.14.3+ or 1.13.7+, you can use the [/sys/replication/merkle-check API](/vault/api-docs/system/replication#sys-replication-merkle-check) endpoint to help determine if your cluster is encountering Merkle tree corruption. In the following sections, you'll learn about some of the details of symptoms and corruption causes which the merkle-check endpoint can detect.
<Note>
Keep in mind that the merkle-check endpoint cannot detect every way in which a Merkle tree could be corrupted.
</Note>
You'll also learn how to query the merkle-check endpoint and interpret its output. Finally, you'll learn about some Vault CLI commands which can help you diagnose corruption.
## Consecutive Merkle difference and synchronization loop
One indication of potential Merkle tree corruption occurs when Vault logs display consecutive Merkle difference and synchronization (merkle-diff and merkle-sync) operations without a lasting return to streaming write-ahead logs (WALs).
A known cause for this symptom is a split brain situation within a High Availability (HA) Vault cluster: the old leader loses leadership while it is still writing data to storage, while a new leader is elected and reads or writes data that the old leader is still mutating. During the leadership transfer, the old leader can write data that is later lost, which results in an inconsistent Merkle tree state.
The two most common symptoms which can potentially indicate a split brain issue are detailed in the following sections.
### Merkle difference results in no delta
A merkle-diff operation resulting in no delta indicates conflicting Merkle tree pages. Despite the two clusters holding exactly the same data in both trees, their root hashes do not match.
The following example log shows entries from a performance replication secondary indicating the issue resulting from a corrupted tree, and no writes on the primary since the previous merkle-sync operation.
<CodeBlockConfig hideClipboard>
```plaintext
vault [INFO] .perf-sec.core0.core: non-matching guard, exiting
vault [TRACE] .perf-sec.core0.core: finished client WAL streaming
vault [INFO] .perf-sec.core0.replication: no matching WALs available
vault [DEBUG] .perf-sec.core0.replication: starting merkle diff
vault [TRACE] .perf-sec.core0.core: wal context done
vault [TRACE] .perf-sec.core0.core: checking conflicting pages
vault [TRACE] .perf-pri.core0.core: serving conflicting pages
vault [DEBUG] .perf-pri.core0.replication.index.perf: creating merkle state snapshot: generation=4
vault [DEBUG] .perf-pri.core0.replication.index.perf: removing state snapshot from cache: generation=4
vault [INFO] .perf-sec.core0.replication: requesting WAL stream: guard=8acf94ac
vault [TRACE] .perf-sec.core0.core: starting client WAL streaming
vault [TRACE] .perf-sec.core0.core: receiving WALs
vault [TRACE] .perf-pri.core0.core: starting serving WALs: clientID=e16930a6-7d24-6924-41fe-aa8beb90b1b2
vault [TRACE] .perf-pri.core0.core: streaming from log shipper done: clientID=e16930a6-7d24-6924-41fe-aa8beb90b1b2
vault [TRACE] .perf-pri.core0.core: internal wal stream stop channel fired: clientID=e16930a6-7d24-6924-41fe-aa8beb90b1b2
vault [TRACE] .perf-pri.core0.core: stopping serving WALs: clientID=e16930a6-7d24-6924-41fe-aa8beb90b1b2
vault [INFO] .perf-sec.core0.core: non-matching guard, exiting
vault [TRACE] .perf-sec.core0.core: finished client WAL streaming
vault [INFO] .perf-sec.core0.replication: no matching WALs available
vault [TRACE] .perf-sec.core0.core: wal context done
vault [DEBUG] .perf-sec.core0.replication: starting merkle diff
vault [TRACE] .perf-sec.core0.core: checking conflicting pages
vault [TRACE] .perf-pri.core0.core: serving conflicting pages
```
</CodeBlockConfig>
In the example log output, the performance secondary cluster's finite state machine (FSM) is entering merkle-diff mode, in which it tries to fetch conflicting pages from the primary cluster. The diff result is empty, indicated by an immediate switch to the stream-wals mode and **skipping the merkle-sync operation**.
Further into the log, the performance secondary cluster immediately goes into the merkle-diff mode again trying to reconcile the discrepancies of its Merkle tree with the primary cluster. This loop goes on without resolution due to the Merkle tree corruption.
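One way to confirm the loop from the server log is to count merkle-diff attempts over time; a count that keeps climbing while the cluster never settles into streaming WALs is consistent with this symptom. This sketch assumes server logs are written to `/var/log/vault.log`:
```shell-session
$ grep -c 'starting merkle diff' /var/log/vault.log
```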
### Non-resolving merkle-sync
When a diff operation reveals conflicting data and the sync operation fetches it, but Vault still cannot enter a lasting streaming-WALs mode afterwards, this indicates a non-matching Merkle roots condition.
In the following server log snippet, merkle-diff returns a non-empty list of page conflicts and merkle-sync fetches those keys. The FSM then transitions to the stream-wals state. Immediately after this transition, the FSM transitions to merkle-diff again and logs a **non-matching guard** error.
<CodeBlockConfig hideClipboard>
```plaintext
vault [INFO] perf-sec.core0.core: non-matching guard, exiting
vault [TRACE] perf-sec.core0.core: finished client WAL streaming
vault [INFO] perf-sec.core0.replication: no matching WALs available
vault [TRACE] perf-sec.core0.core: wal context done
vault [DEBUG] perf-sec.core0.replication: transitioning state: state=merkle-diff
vault [DEBUG] perf-sec.core0.replication: starting merkle diff
vault [TRACE] perf-sec.core0.core: checking conflicting pages
vault [TRACE] perf-pri.core0.core: serving conflicting pages
vault [DEBUG] perf-pri.core0.replication.index.perf: creating merkle state snapshot: generation=3
vault [TRACE] perf-sec.core0.core: fetching subpage hashes
vault [TRACE] perf-pri.core0.core: serving subpage hashes
vault [DEBUG] perf-pri.core0.replication.index.perf: removing state snapshot from cache: generation=3
vault [DEBUG] perf-sec.core0.replication: transitioning state: state=merkle-sync
vault [DEBUG] perf-sec.core0.replication: waiting for operations to complete before merkle sync
vault [DEBUG] perf-sec.core0.replication: starting merkle sync: num_conflict_keys=4
vault [DEBUG] perf-sec.core0.replication: merkle sync debug info: local_keys=[] remote_keys=[] conflicting_keys=["logical/67bf7b33-734e-f909-86e5-a7e69af0979f/junk9", "logical/67bf7b33-734e-f909-86e5-a7e69af0979f/junk7", "logical/67bf7b33-734e-f909-86e5-a7e69af0979f/junk8", "logical/67bf7b33-734e-f909-86e5-a7e69af0979f/junk6"]
vault [DEBUG] perf-sec.core0.replication: transitioning state: state=stream-wals
vault [INFO] perf-sec.core0.replication: requesting WAL stream: guard=0c556858
vault [TRACE] perf-sec.core0.core: starting client WAL streaming
vault [TRACE] perf-sec.core0.core: receiving WALs
vault [TRACE] perf-pri.core0.core: starting serving WALs: clientID=6afbce30-67c5-bb15-6eda-001140d33275
vault [TRACE] perf-pri.core0.core: streaming from log shipper done: clientID=6afbce30-67c5-bb15-6eda-001140d33275
vault [TRACE] perf-pri.core0.core: internal wal stream stop channel fired: clientID=6afbce30-67c5-bb15-6eda-001140d33275
vault [TRACE] perf-pri.core0.core: stopping serving WALs: clientID=6afbce30-67c5-bb15-6eda-001140d33275
vault [INFO] perf-sec.core0.core: non-matching guard, exiting
vault [TRACE] perf-sec.core0.core: finished client WAL streaming
vault [INFO] perf-sec.core0.replication: no matching WALs available
vault [DEBUG] perf-sec.core0.replication: transitioning state: state=merkle-diff
vault [DEBUG] perf-sec.core0.replication: starting merkle diff
vault [TRACE] perf-sec.core0.core: wal context done
vault [TRACE] perf-sec.core0.core: checking conflicting pages
vault [TRACE] perf-pri.core0.core: serving conflicting pages
```
</CodeBlockConfig>
## Use the merkle-check endpoint
The following examples show how you can use `curl` to query the merkle-check endpoint.
<Note>
The merkle-check endpoint is authenticated. You need a Vault token with the capabilities detailed in the [endpoint documentation](/vault/api-docs/system/replication#sys-replication-merkle-check) to query the endpoint.
</Note>
### Check the primary cluster
```shell-session
$ curl \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    --request POST \
    $VAULT_ADDR/v1/sys/replication/merkle-check
```
Example output:
<CodeBlockConfig hideClipboard>
```json
{
"request_id": "d4b2ad1a-6e5f-7f9e-edfe-558eb89a40e6",
"lease_id": "",
"lease_duration": 0,
"renewable": false,
"data": {
"merkle_corruption_report": {
"corrupted_root": false,
"corrupted_tree_map": {
"1": {
"corrupted_index_tuples_map": {
"5": {
"corrupted": false,
"subpages": [
28
]
}
},
"corrupted_subtree_root": false,
"root_hash": "DyGc6rQTV9XgyNSff3zimhi3FJM=",
"tree_type": "replicated"
},
"2": {
"corrupted_index_tuples_map": null,
"corrupted_subtree_root": false,
"root_hash": "EXmRTdfYCZTm5i9wLef9RQqyLCw=",
"tree_type": "local"
}
},
"last_corruption_check_epoch": "2023-09-11T11:25:59.44956-07:00"
}
}
}
```
</CodeBlockConfig>
The `merkle_corruption_report` stanza provides information about Merkle tree corruption.
When the composite tree root hash is corrupted, Vault sets the `corrupted_root` field to **true**; when a subtree root hash is corrupted, Vault sets the corresponding `corrupted_subtree_root` field to **true**.
The `corrupted_tree_map` field identifies any corruption in the subtrees, including replicated and local subtrees. The replicated tree is indexed by number `1` in the map and the local tree is indexed by number `2`. The `tree_type` sub-field also shows which tree contains a corrupted page. The replicated subtree stores the information that is replicated to either a disaster recovery or a performance replication secondary cluster.
It contains replicated items such as Vault configuration, secrets engine and auth method configuration, and KV secrets. The local subtree stores information relevant to the local cluster, such as local mounts, leases, and tokens.
In the event of corruption within a page or a subpage of a tree, the `corrupted_index_tuples_map` includes the page number along with a list of corrupted subpage numbers. If the page hash is corrupted, the `corrupted` field is set to true; otherwise, it is set to false.
In the example output above, the replicated tree is corrupted only on subpage 28 of page 5; the page hash, the replicated subtree root hash, and the composite root hash are all intact.
### Check a secondary cluster
The Merkle check endpoint prints information about the corruption status of the Merkle tree on a disaster recovery (DR) secondary cluster. You need a [DR operation token](/vault/tutorials/enterprise/disaster-recovery#dr-operation-token-strategy) to access this endpoint. Here is an example `curl` command that demonstrates querying the endpoint.
```shell-session
$ curl $VAULT_ADDR/v1/sys/replication/dr/secondary/merkle-check
```
## CLI commands for diagnosing corruption
You need to check four layers for potential Merkle tree corruption: composite root, subtree roots, page hash, and subpage hash. You can use the following example Vault CLI commands in combination with the [jq](https://jqlang.github.io/jq/) tool to diagnose corruption.
### Composite root corruption
Use a Vault command like the following to check if the composite tree root is corrupted:
```shell-session
$ vault write sys/replication/merkle-check \
-format=json | jq -r '.data.merkle_corruption_report.corrupted_root'
```
If the response is true, the Merkle tree is corrupted at the composite root.
### Subtree root corruption
Use a Vault command like the following to check if the subtree root is corrupted:
```shell-session
$ vault write sys/replication/merkle-check -format=json \
| jq -r '.data.merkle_corruption_report.corrupted_tree_map[] | select(.corrupted_subtree_root==true)'
```
If the response is non-empty, at least one subtree root is corrupted.
### Page or subpage corruption
Use a Vault command like the following to check if page or subpage is corrupted:
```shell-session
$ vault write sys/replication/merkle-check -format=json \
| jq -r '.data.merkle_corruption_report.corrupted_tree_map[] | select(.corrupted_index_tuples_map!=null)'
```
If the response is a non-empty map, then at least one page or subpage is corrupted.
The following is an example of a non-empty map in the command output:
<CodeBlockConfig hideClipboard>
```json
{
"corrupted_index_tuples_map": {
"23": {
"corrupted": false,
"subpages": [
234
]
}
},
"corrupted_subtree_root": false,
"root_hash": "A5uW54VXDM4jUryDkxN8Vauk8kE=",
"tree_type": "replicated"
}
```
</CodeBlockConfig>
In this example, page number 23 is returned; however, the `corrupted` field is false, which means the page hash itself is not corrupted. Subpage 234 is listed as a corrupted subpage.
To locate a corrupted subpage, you must also note its parent page, because each page contains 256 subpages and the subpage indexes repeat for every page. It is possible for only the full page to be corrupted without any corrupted subpages; in that case, the `corrupted` field in the page map is true, and no subpages are listed.
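To list each corrupted page together with its corrupted subpages across both trees, you can extend the commands above; this sketch assumes the report structure shown earlier:
```shell-session
$ vault write sys/replication/merkle-check -format=json \
  | jq -r '.data.merkle_corruption_report.corrupted_tree_map[]
      | select(.corrupted_index_tuples_map != null)
      | .tree_type as $t
      | .corrupted_index_tuples_map
      | to_entries[]
      | "\($t) tree: page \(.key) corrupted=\(.value.corrupted) subpages=\(.value.subpages)"'
```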
## Caveats and considerations
We generally recommend that you consult the merkle-check endpoint before reindexing to ensure the process will be useful, as reindexing can be time-consuming and lead to downtime.
---
layout: docs
page_title: Replication - Vault Enterprise
description: >-
Vault Enterprise has support for Replication, allowing critical data to be
replicated across clusters to support horizontally scaling and disaster
recovery workloads.
---
# Vault Enterprise replication
@include 'alerts/enterprise-and-hcp.mdx'
## Overview
Many organizations have infrastructure that spans multiple datacenters. Vault
provides the critical services of identity management, secrets storage, and
policy management. This functionality is expected to be highly available and
to scale as the number of clients and their functional needs increase; at the
same time, operators would like to ensure that a common set of policies are
enforced globally, and a consistent set of secrets and keys are exposed to
applications that need to interoperate.
Vault replication addresses both of these needs in providing consistency,
scalability, and highly-available disaster recovery.
<Note title="Storage backend requirement">
Using replication requires a storage backend that supports transactional
updates, such as [Integrated Storage](/vault/docs/concepts/integrated-storage)
or Consul.
</Note>
## Architecture
The core unit of Vault replication is a **cluster**, which is composed of a
collection of Vault nodes (an active node and its corresponding HA nodes). Multiple Vault
clusters communicate in a one-to-many, near real-time flow.
Replication operates on a leader/follower model, wherein a leader cluster (known as a
**primary**) is linked to a series of follower **secondary** clusters. The primary
cluster acts as the system of record and asynchronously replicates most Vault data.
All communication between primaries and secondaries is end-to-end encrypted
with mutually-authenticated TLS sessions, set up via replication tokens that are
exchanged during bootstrapping.
## Replicated data
The data replicated between the primary and secondary depends on the type of
replication configured between them. These relationships are either
**disaster recovery** or **performance replication** relationships.

The following table shows a capability comparison between Disaster Recovery
and Performance Replication.
| Capability | Disaster Recovery | Performance Replication |
| -------------------------------------------------------------------------------------------------------------------- | ----------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Mirrors the configuration of a primary cluster | Yes | Yes |
| Mirrors the configuration of a primary cluster’s backends (i.e., auth methods, secrets engines, audit devices, etc.) | Yes | Yes |
| Mirrors the tokens and leases for applications and users interacting with the primary cluster | Yes | No. Secondaries keep track of their own tokens and leases. When the secondary is promoted, applications must reauthenticate and obtain new leases from the newly-promoted primary. |
| Allows the secondary cluster to handle client requests | No | Yes |
Everything written to storage is classified into one of three categories:
* `replicated` (or "shared"): all downstream clusters receive it
* `local`: only downstream disaster recovery clusters receive it
* `ignored`: not replicated to downstream clusters at all
When mounting a secret engine or auth method, you can choose whether to make it a
local mount or a shared mount.
Shared mounts (the default) usually replicate all their data to performance secondaries, but
they can choose to designate specific storage paths as local. For example, PKI mounts
store certificates locally. If you query the roles on a shared PKI mount, you'll see
the same result for that mount when you send the query to either the performance primary or
secondary, but if you list stored certs, you'll see different values.
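For illustration, assuming a shared PKI mount at `pki/`, listing roles yields the same result on the performance primary and secondary, while listing stored certificates can differ because certificates live under a local storage path:

```shell-session
$ vault list pki/roles   # identical output on the primary and the secondary
$ vault list pki/certs   # output can differ between the primary and the secondary
```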
Local mounts replicate their data only to disaster recovery secondaries. A local mount
created on a performance primary isn't visible at all to its performance secondaries.
Local mounts can also be created on performance secondaries, in which case they aren't visible
to the performance primary.
Other storage entries aren't mount-specific. For example, the Integrated
Storage Autopilot configuration is an `ignored` storage entry, which allows disaster recovery
secondaries to have a different configuration than their primary. Tokens and leases are written to
`local` storage entries.
## Performance replication
@include 'alerts/enterprise-and-hcp.mdx'
In Performance Replication, secondaries keep track of their own tokens and leases
but share the underlying configuration, policies, and supporting secrets (KV values,
encryption keys for `transit`, etc).
If a user action would modify underlying shared state, the secondary forwards the request
to the primary to be handled; this is transparent to the client. In practice, most
high-volume workloads (reads in the `kv` backend, encryption/decryption operations
in `transit`, etc.) can be satisfied by the local secondary, allowing Vault to scale
relatively horizontally with the number of secondaries rather than vertically as
in the past.
<Note title="Performance replication does not include automated integrated storage snapshots">
Vault does not replicate automated integrated storage snapshots as a part of
performance replication.
You must explicitly configure the primary and each performance secondary
cluster to create its own automated snapshots.
</Note>
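For example, you could give each cluster its own schedule with the Integrated Storage automated snapshots API, run separately against each cluster. This is a minimal sketch; the configuration name `daily` and the paths are illustrative:

```shell-session
$ vault write sys/storage/raft/snapshot-auto/config/daily \
    interval=24h \
    retain=7 \
    storage_type=local \
    path_prefix=/opt/vault/snapshots \
    local_max_space=4294967296
```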
### Paths filter
The primary cluster's mount configuration gets replicated across its secondary
clusters when you enable Performance Replication. In some cases, you may not
want all data to be replicated. For example, your primary cluster is in the EU
region, and you have a secondary cluster outside of the EU region. [General Data
Protection Regulation (GDPR)](https://gdpr.eu/) requires that personally
identifiable data not be physically transferred to locations outside the
European Union unless the region or country has an equal rigor of data
protection regulation as the EU.
To comply with GDPR, use Vault's **paths filter** feature to satisfy data
movement and sovereignty regulations while still providing performant access
across geographically distributed regions.
You can set filters based on the mount path of the secrets engines and
namespaces.

In the above example, the `EU_GDPR_data/` path and `office_FR` namespace will
not be replicated and remain only on the primary cluster.
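For example, assuming a performance secondary registered with the illustrative ID `us-east-1`, a deny-mode paths filter for the paths in the diagram could be created on the primary like this (a sketch, not a complete setup):

```shell-session
$ vault write sys/replication/performance/primary/paths-filter/us-east-1 \
    mode=deny \
    paths=EU_GDPR_data/,office_FR/
```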
Similarly, if you want to prevent a secondary cluster's data from being
replicated, you can mark those secrets engines and/or auth methods as **local**.
Local secrets engines and auth methods are not replicated or removed by
replication.
**Example:** When you enable a secrets engine on a secondary cluster, use the
`-local` flag.
```shell-session
$ vault secrets enable -local -path=us_west_data kv-v2
```
<Highlight title="Tutorials">
Refer to the _manage replicated mounts_ section in the [Set up performance
replication](/vault/tutorials/enterprise/performance-replication#manage-replicated-mounts)
tutorial to learn how to specify the mounts to allow or deny data replication.
</Highlight>
## Disaster recovery (DR) replication
In disaster recovery (or DR) replication, secondaries share the same underlying configuration,
policy, and supporting secrets (KV values, encryption keys for `transit`, etc) infrastructure
as the primary. They also share the same token and lease infrastructure as the primary, as
they are designed to allow continuous operations for applications connected to the
original primary when the DR secondary is promoted.
DR replication is designed to protect against catastrophic failure of entire clusters.
DR secondaries do not forward service read or write requests until they are promoted and become the new primary.
-> **Note**: Unlike with Performance Replication, local secret engines, auth methods and audit devices are replicated to a DR secondary.
For more information on the capabilities of performance and disaster recovery replication, see the Vault Replication [API Documentation](/vault/api-docs/system/replication).
## Primary and secondary cluster compatibility
### Storage engines
There is no requirement that both clusters use the same storage engine.
### Seals
There is no requirement that both clusters use the same seal type, but see
[sealwrap](/vault/docs/enterprise/sealwrap#seal-wrap-and-replication) for the full
details.
Also note that enabling replication will modify the secondary seal.
If the secondary uses an auto seal, its recovery configuration and keys
will be replaced; if it uses shamir, its seal configuration and unseal
keys will be replaced. Here seal/recovery configuration means the number of
seal/recovery key fragments and the required threshold of those
fragments.
| Primary Seal | Secondary Seal (before) | Secondary Seal (after) | Secondary Recovery Key (after) | Impact on Secondary of Enabling Replication |
|--------------|-------------------------|----------------------------------------|--------------------------------|-----------------------------------------------------------------------------|
| Shamir | Shamir | Primary's shamir config & unseal keys | N/A | Seal config and unseal keys replaced with primary's |
| Shamir | Auto | Unchanged | Receives primary seal | Seal recovery config and keys replaced with primary's seal config and keys |
| Auto | Auto | Unchanged | Receives primary recovery | Seal recovery config and recovery keys replaced with primary's |
| Auto | Shamir | Receives primary recovery | N/A | Seal config and keys replaced with primary's recovery seal config and keys |
<Note>
Vault clusters configured with
[auto-unseal](/vault/docs/concepts/seal#auto-unseal) have recovery keys instead
of unseal keys.
</Note>
### Vault versions
Vault changes are designed and tested to ensure that the
[upgrade instructions](/vault/docs/upgrading#replication-installations) are viable, i.e.
that a secondary can run a newer Vault version than its primary.
That said, we do not recommend running replicated Vault clusters with different
versions any longer than necessary to perform the upgrade.
## Internals
Details on the internal design of the replication feature can be found in the
[replication internals](/vault/docs/internals/replication) document.
## Security model
Vault is trusted all over the world to keep secrets safe. As such, we have
applied the same attention to detail to the security of our replication model.
### Primary/Secondary communication
When a cluster is marked as the primary it generates a self-signed CA
certificate. On request, and given a user-specified identifier, the primary
uses this CA certificate to generate a private key and certificate and packages
these, along with some other information, into a replication bootstrapping
bundle, a.k.a. a secondary activation token. The certificate is used to perform
TLS mutual authentication between the primary and that secondary.
This CA certificate is never shared with secondaries, and no secondary ever has
access to any other secondary’s certificate. In practice this means that
revoking a secondary's access to the primary does not allow it to continue
replication with any other machine; it also means that if a primary goes down,
there is full administrative control over which cluster becomes primary. An
attacker cannot spoof a secondary into believing that a cluster the attacker
controls is the new primary without also being able to administratively direct
the secondary to connect by giving it a new bootstrap package (which is an
ACL-protected call).
Vault makes use of Application Layer Protocol Negotiation on its cluster port.
This allows the same port to handle both request forwarding and replication,
even while keeping the certificate root of trust and feature set different.
### Secondary activation tokens
A secondary activation token is an extremely sensitive item and as such is
protected via response wrapping. Experienced Vault users will note that the
wrapping format for replication bootstrap packages is different from normal
response wrapping tokens: it is a signed JWT. This allows the replication token
to carry the redirect address of the primary cluster as part of the token. In
most cases this means that simply providing the token to a new secondary is
enough to activate replication, although this can also be overridden when the
token is provided to the secondary.
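As a sketch of this flow (assuming performance replication and an illustrative secondary ID of `us-west`), the primary issues the activation token and the secondary consumes it:

```shell-session
$ vault write sys/replication/performance/primary/secondary-token id=us-west

$ vault write sys/replication/performance/secondary/enable token=<activation-token>
```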
Secondary activation tokens should be treated like Vault root tokens. If
disclosed to a bad actor, that actor can gain access to all Vault data. It
should therefore be treated with utmost sensitivity. Like all
response-wrapping tokens, once the token is used successfully (in this case, to
activate a secondary) it is useless, so it is only necessary to safeguard it
from one machine to the next. Like with root tokens, HashiCorp recommends that
when a secondary activation token is live, there are multiple eyes on it from
generation until it is used.
Once a secondary is activated, its cluster information is stored safely behind
its encrypted barrier.
## Mutual TLS and load balancers
Vault generates its own certificates for cluster members. After initial
bootstrapping, all replication traffic uses the cluster port with these
Vault-generated certificates. Because of this, cluster traffic can NOT be
TLS-terminated at the cluster port by a load balancer.
## Tutorial
Refer to the following tutorials for replication setup and best practices:
- [Set up Performance Replication](/vault/tutorials/enterprise/performance-replication)
- [Disaster Recovery Replication Setup](/vault/tutorials/enterprise/disaster-recovery)
- [Monitoring Vault Replication](/vault/tutorials/monitoring/monitor-replication)
## API
The Vault replication component has a full HTTP API. Refer to the
[Vault Replication API](/vault/api-docs/system/replication) for more
details.
---
layout: docs
page_title: PKCS#11 Provider - Vault Enterprise
description: |-
The Vault PKCS#11 Provider allows Vault KMIP Secrets Engine to be used via PKCS#11 calls.
The provider supports a subset of key generation, encryption, decryption and key storage operations.
This requires the Enterprise ADP-KM license.
---
# PKCS#11 provider
@include 'alerts/enterprise-only.mdx'
The PKCS#11 provider is part of the [KMIP Secrets Engine](/vault/docs/secrets/kmip), which requires [Vault Enterprise](https://www.hashicorp.com/products/vault/pricing)
with the Advanced Data Protection (ADP) module.
[PKCS#11](http://docs.oasis-open.org/pkcs11/pkcs11-base/v2.40/os/pkcs11-base-v2.40-os.html)
is an open standard C API that provides a means to access cryptographic capabilities on a device.
For example, it is often used to access a Hardware Security Module (HSM) (like a [Yubikey](https://www.yubico.com/)) from a local program (such as [GPG](https://gnupg.org/)).
Vault provides a PKCS#11 library (or provider) so that Vault can be used as an SSM (Software Security Module).
This allows a user to treat Vault like any other PKCS#11 device to manage keys, objects, and perform encryption and decryption in Vault using PKCS#11 calls.
The PKCS#11 library connects to Vault's [KMIP Secrets Engine](/vault/docs/secrets/kmip) to provide cryptographic operations and object storage.
## Platform support
This library works with Vault Enterprise 1.11+ with the advanced data protection module in the license
with the KMIP Secrets Engine.
| Operating System | Architecture | Distribution | glibc |
| ---------------- | ------------ | ----------------- | ------- |
| Linux | x86-64 | RHEL 7 compatible | 2.17 |
| Linux | x86-64 | RHEL 8 compatible | 2.28 |
| Linux | x86-64 | RHEL 9 compatible | 2.34 |
| macOS | x86-64 | — | — |
| macOS | arm64 | — | — |
_Note:_ `vault-pkcs11-provider` runs on _any_ glibc-based Linux distribution. The table above lists RHEL-compatible glibc versions; choose the
`vault-pkcs11-provider` build whose glibc version is the same as or older than the one your distribution provides.
The provider comes in the form of a shared C library, `libvault-pkcs11.so` (for Linux) or `libvault-pkcs11.dylib` (for macOS).
It can be downloaded from [releases.hashicorp.com](https://releases.hashicorp.com/vault-pkcs11-provider).
## Quick start
1. To use the provider, you will need access to a Vault Enterprise instance with the KMIP Secrets Engine.
For example, you can start one locally (if you have a license in the `VAULT_LICENSE` environment variable) with:
```sh
docker pull hashicorp/vault-enterprise &&
docker run --name vault \
-p 5696:5696 \
-p 8200:8200 \
--cap-add=IPC_LOCK \
-e VAULT_LICENSE=$(printenv VAULT_LICENSE) \
-e VAULT_ADDR=http://127.0.0.1:8200 \
-e VAULT_TOKEN=root \
hashicorp/vault-enterprise \
server -dev -dev-root-token-id root -dev-listen-address 0.0.0.0:8200
```
1. Configure the [KMIP Secrets Engine](/vault/docs/secrets/kmip) and a KMIP *scope*. The scope is used to hold keys and objects.
~> **Note**: These commands will output the credentials in plaintext.
```sh
vault secrets enable kmip
vault write kmip/config listen_addrs=0.0.0.0:5696
vault write -f kmip/scope/my-service
vault write kmip/scope/my-service/role/admin operation_all=true
vault write -f -format=json kmip/scope/my-service/role/admin/credential/generate | tee kmip.json
```
~> **Important**: When configuring KMIP in production, you will probably need to set the
`server_hostnames` and `server_ips` [configuration parameters](/vault/api-docs/secret/kmip#parameters),
otherwise the TLS connection to the KMIP Secrets Engine will fail due to certification validation errors.
This last line will generate a JSON file with the certificate, key, and CA certificate chain to connect
to the KMIP server. You'll need to save these to files so that the PKCS#11 provider can use them.
```sh
jq --raw-output --exit-status '.data.ca_chain[]' kmip.json > ca.pem
jq --raw-output --exit-status '.data.certificate' kmip.json > cert.pem
```
The certificate file from the KMIP Secrets Engine also contains the key.
1. Create a configuration file called `vault-pkcs11.hcl`:
```hcl
slot {
server = "127.0.0.1:5696"
tls_cert_path = "cert.pem"
ca_path = "ca.pem"
scope = "my-service"
}
```
See [below](#configuration) for all available parameters.
1. Copy the certificates from the KMIP credentials into the files specified in the configuration file (e.g., `cert.pem`, and `ca.pem`).
1. You should now be able to use the `libvault-pkcs11.so` (or `.dylib`) library to access the KMIP Secrets Engine in Vault using any PKCS#11-compatible tool, like OpenSC's `pkcs11-tool`, e.g.:
```sh
$ VAULT_LOG_FILE=/dev/null pkcs11-tool --module ./libvault-pkcs11.so -L
Available slots:
Slot 0 (0x0): Vault slot 0
token label : Token 0
token manufacturer : HashiCorp
token model : Vault Enterprise
token flags : token initialized, PIN initialized, other flags=0x60
hardware version : 1.12
firmware version : 1.12
serial num : 1234
pin min/max : 0/255
$ VAULT_LOG_FILE=/dev/null pkcs11-tool --module ./libvault-pkcs11.so --keygen -a abc123 --key-type AES:32 \
--extractable --allow-sw 2>/dev/null
Key generated:
Secret Key Object; AES length 32
VALUE:
label: abc123
Usage: encrypt, decrypt, wrap, unwrap
Access: none
```
The `VAULT_LOG_FILE=/dev/null` setting is to prevent the Vault PKCS#11 driver logs from appearing in stdout (the default if no file is specified).
In production, it's good to set `VAULT_LOG_FILE` to point to somewhere more permanent, like `/var/log/vault.log`.
## Configuration
The PKCS#11 Provider can be configured through an HCL file and through environment variables.
The HCL file contains directives to map PKCS#11 device
[slots](http://docs.oasis-open.org/pkcs11/pkcs11-base/v2.40/os/pkcs11-base-v2.40-os.html#_Toc416959678) (logical devices)
to Vault instances and KMIP scopes and configures how the library will authenticate to KMIP (with a client TLS certificate).
The PKCS#11 library will look for this file in `vault-pkcs11.hcl` and `/etc/vault-pkcs11.hcl` by default, or you can override this by setting the `VAULT_KMIP_CONFIG` environment variable.
For example,
```hcl
slot {
server = "127.0.0.1:5696"
tls_cert_path = "cert.pem"
ca_path = "ca.pem"
scope = "my-service"
}
```
The `slot` block configures the first PKCS#11 slot to point to Vault.
Most programs will use only one slot.
- `server` (required): the Vault server's IP or DNS name and port number (5696 is the default).
- `tls_cert_path` (required): the location of the client TLS certificate used to authenticate to the KMIP engine.
- `tls_key_path` (optional, defaults to the value of `tls_cert_path`): the location of the encrypted or unencrypted TLS key used to authenticate to the KMIP engine.
- `ca_path` (required): the location of the CA bundle that will be used to verify the server's certificate.
- `scope` (required): the [KMIP scope](/vault/docs/secrets/kmip#scopes-and-roles) to authenticate against and where the TDE master keys and associated metadata will be stored.
- `cache` (optional, default `true`): whether the provider uses a cache to improve the performance of `C_GetAttributeValue` (KMIP: `GetAttributes`) calls.
- `emulate_hardware` (optional, default `false`): specifies if the provider should report that it is connected to a hardware device.
Environment variables can also be used to configure these parameters and more.
- `VAULT_KMIP_CONFIG`: location of the HCL configuration file. By default, the provider will check `./vault-pkcs11.hcl` and `/etc/vault-pkcs11.hcl`.
- `VAULT_KMIP_CERT_FILE`: location of the TLS certificate used for authentication to the KMIP engine.
- `VAULT_KMIP_KEY_FILE`: location of the TLS key used for authentication to the KMIP engine.
- `VAULT_KMIP_KEY_PASSWORD`: password for the TLS key file used to authenticate to the KMIP engine, if the key is encrypted.
- `VAULT_KMIP_CA_FILE`: location of the TLS CA bundle used to authenticate the connection to the KMIP engine.
- `VAULT_KMIP_SERVER`: address and port of the KMIP engine to use for encryption and storage.
- `VAULT_KMIP_SCOPE`: KMIP scope to use for encryption and storage.
- `VAULT_KMIP_CACHE`: whether or not to cache `C_GetAttributeValue` (KMIP: `GetAttributes`) calls.
- `VAULT_LOG_LEVEL`: the log level that the provider will use. Defaults to `WARN`. Valid values include `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, and `OFF`.
- `VAULT_LOG_FILE`: the location of the file the provider will use for logging. Defaults to standard out.
- `VAULT_EMULATE_HARDWARE`: whether or not the provider will report that it is backed by a hardware device.
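For example, the slot shown in the HCL file above could instead be configured entirely through the environment. This sketch reuses the quick-start values; the log file path is illustrative:

```sh
export VAULT_KMIP_SERVER=127.0.0.1:5696
export VAULT_KMIP_CERT_FILE=cert.pem
export VAULT_KMIP_CA_FILE=ca.pem
export VAULT_KMIP_SCOPE=my-service
export VAULT_LOG_FILE=/var/log/vault.log
export VAULT_LOG_LEVEL=INFO
```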
## Encrypted TLS key support
The TLS key returned by the KMIP engine is unencrypted by default.
However, the PKCS#11 provider does support (limited) encryption options for the key using [RFC 1423](https://www.rfc-editor.org/rfc/rfc1423).
We would only recommend using AES-256-CBC out of the available algorithms.
The keys from KMIP should be ECDSA keys, and can be encrypted with a password with OpenSSL, e.g.,:
```sh
openssl ec -in cert.key -out encrypted.key -aes-256-cbc
```
The PKCS#11 provider will need access to the password to decrypt the TLS key.
The password can be supplied to the provider in two ways:
- the `VAULT_KMIP_KEY_PASSWORD` environment variable, or
- the "PIN" parameter to the `C_Login` PKCS#11 function, which will be used to try to decrypt the encrypted TLS key.
Note that only a single password can be supplied via the `VAULT_KMIP_KEY_PASSWORD`, so if multiple slots in the HCL file use encrypted TLS keys, they will need to be encrypted with the same password, or use the `C_Login` method to specify the password.
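For example, with OpenSC's `pkcs11-tool` you can exercise the `C_Login` path by passing the key password as the PIN. This is a sketch, assuming the key was encrypted as shown above; the password value is illustrative:

```sh
pkcs11-tool --module ./libvault-pkcs11.so --login --pin 'my-key-password' \
  --list-objects
```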
## Error handling
If an error occurs, the first place to check will be the `VAULT_LOG_FILE` for any relevant error messages.
If the PKCS#11 provider returns an error code of `0x30` (`CKR_DEVICE_ERROR`), then an additional device error code may
be available from the `C_SessionInfo` call.
Here are the known device error codes the provider will return:
| Code | Meaning |
| ---- | ---------------------------------------------------------------- |
| 400 | Invalid input was provided in the configuration or PKCS#11 call. |
| 401 | Invalid credentials were provided. |
| 404 | The object, attribute, or key was not found. |
| 600 | An unknown I/O error occurred. |
| 601  | A KMIP engine error occurred.                                     |
## Capabilities
The Vault PKCS#11 provider implements the following PKCS#11 provider profiles:
- [Baseline](http://docs.oasis-open.org/pkcs11/pkcs11-profiles/v2.40/os/pkcs11-profiles-v2.40-os.html#_Toc416960548)
- [Extended](http://docs.oasis-open.org/pkcs11/pkcs11-profiles/v2.40/os/pkcs11-profiles-v2.40-os.html#_Toc416960554)
The following key generation mechanisms are currently supported:
| Name | Mechanism Number |Provider Version|Vault Version|
| ------------------ | ---------------- |----------------|-------------|
| RSA-PKCS | `0x0000` | 0.2.0 | 1.13 |
| AES key generation | `0x1080` | 0.1.0 | 1.12 |
The following encryption mechanisms are currently supported:
| Name | Mechanism Number |Provider Version|Vault Version|
| ------------------ | ---------------- |----------------|-------------|
| RSA-PKCS | `0x0001` | 0.2.0 | 1.13 |
| RSA-PKCS-OAEP | `0x0009` | 0.2.0 | 1.13 |
| AES-ECB | `0x1081` | 0.2.0 | 1.13 |
| AES-CBC | `0x1082` | 0.1.0 | 1.12 |
| AES-CBC Pad | `0x1085` | 0.1.0 | 1.12 |
| AES-CTR | `0x1086` | 0.1.0 | 1.12 |
| AES-GCM | `0x1087` | 0.1.0 | 1.12 |
| AES-OFB | `0x2104` | 0.2.0 | 1.13 |
| AES-CFB128 | `0x2107` | 0.2.0 | 1.13 |
The following signing mechanisms are currently supported:
| Name | Mechanism Number |Provider Version|Vault Version|
| ------------------ | ---------------- |----------------|-------------|
| RSA-PKCS | `0x0001` | 0.2.0 | 1.13 |
| SHA256-RSA-PKCS | `0x0040` | 0.2.0 | 1.13 |
| SHA384-RSA-PKCS | `0x0041` | 0.2.0 | 1.13 |
| SHA512-RSA-PKCS | `0x0042` | 0.2.0 | 1.13 |
| SHA224-RSA-PKCS | `0x0046` | 0.2.0 | 1.13 |
| SHA512-224-HMAC | `0x0049` | 0.2.0 | 1.13 |
| SHA512-256-HMAC | `0x004D` | 0.2.0 | 1.13 |
| SHA256-HMAC | `0x0251` | 0.2.0 | 1.13 |
| SHA224-HMAC | `0x0256` | 0.2.0 | 1.13 |
| SHA384-HMAC | `0x0261` | 0.2.0 | 1.13 |
| SHA512-HMAC | `0x0271` | 0.2.0 | 1.13 |
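For example, you could sign a file with one of the RSA mechanisms above using OpenSC's `pkcs11-tool`. This is a sketch; the key label `my-rsa-key` and the file names are illustrative:

```sh
pkcs11-tool --module ./libvault-pkcs11.so --sign \
  --mechanism SHA256-RSA-PKCS --label my-rsa-key \
  --input-file message.txt --output-file message.sig
```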
<Tabs>
<Tab heading="Supported PKCS#11 Functions (version 0.2)">
Here is the list of supported and unsupported PKCS#11 functions:
- Encryption and decryption
- [X] `C_EncryptInit`
- [X] `C_Encrypt`
- [X] `C_EncryptUpdate`
- [X] `C_EncryptFinal`
- [X] `C_DecryptInit`
- [X] `C_Decrypt`
- [X] `C_DecryptUpdate`
- [X] `C_DecryptFinal`
- Key management
- [X] `C_GenerateKey`
- [X] `C_GenerateKeyPair`
- [ ] `C_WrapKey`
- [ ] `C_UnwrapKey`
- [ ] `C_DeriveKey`
- Objects
- [X] `C_CreateObject`
- [X] `C_DestroyObject`
- [X] `C_GetAttributeValue`
- [X] `C_FindObjectsInit`
- [X] `C_FindObjects`
- [X] `C_FindObjectsFinal`
- [ ] `C_SetAttributeValue`
- [ ] `C_CopyObject`
- [ ] `C_GetObjectSize`
- Management
- [X] `C_Initialize`
- [X] `C_Finalize`
- [X] `C_Login` (PIN is used as a passphrase for the TLS encryption key, if provided)
- [X] `C_Logout`
- [X] `C_GetInfo`
- [X] `C_GetSlotList`
- [X] `C_GetSlotInfo`
- [X] `C_GetTokenInfo`
- [X] `C_GetMechanismList`
- [X] `C_GetMechanismInfo`
- [X] `C_OpenSession`
- [X] `C_CloseSession`
- [X] `C_CloseAllSessions`
- [X] `C_GetSessionInfo`
- [ ] `C_InitToken`
- [ ] `C_InitPIN`
- [ ] `C_SetPIN`
- [ ] `C_GetOperationState`
- [ ] `C_SetOperationState`
- [ ] `C_GetFunctionStatus`
- [ ] `C_CancelFunction`
- [ ] `C_WaitForSlotEvent`
- Signing
- [X] `C_SignInit`
- [X] `C_Sign`
- [X] `C_SignUpdate`
- [X] `C_SignFinal`
- [ ] `C_SignRecoverInit`
- [ ] `C_SignRecover`
- [X] `C_VerifyInit`
- [X] `C_Verify`
- [X] `C_VerifyUpdate`
- [X] `C_VerifyFinal`
- [ ] `C_VerifyRecoverInit`
- [ ] `C_VerifyRecover`
- Digests
- [ ] `C_DigestInit`
- [ ] `C_Digest`
- [ ] `C_DigestUpdate`
- [ ] `C_DigestKey`
- [ ] `C_DigestFinal`
- [ ] `C_DigestEncryptUpdate`
- [ ] `C_DecryptDigestUpdate`
- [ ] `C_SignEncryptUpdate`
- [ ] `C_DecryptVerifyUpdate`
- Random Number Generation (see note below)
- [X] `C_SeedRandom`
- [X] `C_GenerateRandom`
</Tab>
<Tab heading="Supported PKCS#11 Functions (version 0.1)">
Here is the list of supported and unsupported PKCS#11 functions:
- Encryption and decryption
- [X] `C_EncryptInit`
- [X] `C_Encrypt`
- [ ] `C_EncryptUpdate`
- [ ] `C_EncryptFinal`
- [X] `C_DecryptInit`
- [X] `C_Decrypt`
- [ ] `C_DecryptUpdate`
- [ ] `C_DecryptFinal`
- Key management
- [X] `C_GenerateKey`
- [ ] `C_GenerateKeyPair`
- [ ] `C_WrapKey`
- [ ] `C_UnwrapKey`
- [ ] `C_DeriveKey`
- Objects
- [X] `C_CreateObject`
- [X] `C_DestroyObject`
- [X] `C_GetAttributeValue`
- [X] `C_FindObjectsInit`
- [X] `C_FindObjects`
- [X] `C_FindObjectsFinal`
- [ ] `C_SetAttributeValue`
- [ ] `C_CopyObject`
- [ ] `C_GetObjectSize`
- Management
- [X] `C_Initialize`
- [X] `C_Finalize`
- [X] `C_Login` (PIN is used as a passphrase for the TLS encryption key, if provided)
- [X] `C_Logout`
- [X] `C_GetInfo`
- [X] `C_GetSlotList`
- [X] `C_GetSlotInfo`
- [X] `C_GetTokenInfo`
- [X] `C_GetMechanismList`
- [X] `C_GetMechanismInfo`
- [X] `C_OpenSession`
- [X] `C_CloseSession`
- [X] `C_CloseAllSessions`
- [X] `C_GetSessionInfo`
- [ ] `C_InitToken`
- [ ] `C_InitPIN`
- [ ] `C_SetPIN`
- [ ] `C_GetOperationState`
- [ ] `C_SetOperationState`
- [ ] `C_GetFunctionStatus`
- [ ] `C_CancelFunction`
- [ ] `C_WaitForSlotEvent`
- Signing
- [ ] `C_SignInit`
- [ ] `C_Sign`
- [ ] `C_SignUpdate`
- [ ] `C_SignFinal`
- [ ] `C_SignRecoverInit`
- [ ] `C_SignRecover`
- [ ] `C_VerifyInit`
- [ ] `C_Verify`
- [ ] `C_VerifyUpdate`
- [ ] `C_VerifyFinal`
- [ ] `C_VerifyRecoverInit`
- [ ] `C_VerifyRecover`
- Digests
- [ ] `C_DigestInit`
- [ ] `C_Digest`
- [ ] `C_DigestUpdate`
- [ ] `C_DigestKey`
- [ ] `C_DigestFinal`
- [ ] `C_DigestEncryptUpdate`
- [ ] `C_DecryptDigestUpdate`
- [ ] `C_SignEncryptUpdate`
- [ ] `C_DecryptVerifyUpdate`
- Random Number Generation (see note below)
- [X] `C_SeedRandom`
- [X] `C_GenerateRandom`
</Tab>
</Tabs>
## Limitations and notes
Due to the nature of Vault, the KMIP Secrets Engine, and PKCS#11, there are some other limitations to be aware of:
- The key and object IDs returned by `C_FindObjects`, etc., are randomized for each session, and cannot be shared between sessions; they have no meaning after a session is closed.
This is because KMIP objects, which are used to store the PKCS#11 objects, have long random strings as IDs, but the PKCS#11 object ID is limited to a 32-bit integer. Also, the PKCS#11 provider does not have any local storage.
- The PKCS#11 provider's performance is heavily dependent on the latency to the Vault server and its performance.
This is because nearly all PKCS#11 API calls are translated 1-1 to KMIP calls, aside from some object attribute calls (which can be locally cached).
Multiple sessions can be safely used simultaneously though, and a single Vault server node has been tested as supporting thousands of ongoing sessions.
- The object attribute cache is valid only for a single object per session, and will be cleared when another object's attributes are queried.
- The random number generator function, `C_GenerateRandom`, is currently implemented in software in the library by calling out to Go's [`crypto/rand`](https://pkg.go.dev/crypto/rand) package,
and does **not** call Vault.
## Changelog
### v0.2.1
* Go update to 1.22.7 and Go dependency updates
* Add license files to artifacts
### v0.2.0
* Introduced support for RSA and HMAC operations
### v0.1.3
* Go update to 1.19.4 and Go dependency updates
* Added missing checksum for EL9 builds
### v0.1.2
* Added arm64 support on macOS
* Go update to 1.19.2 and Go dependency updates
### v0.1.1
* KMIP: Set activation date attribute required by Vault 1.12
* KMIP: Revoke a key prior to destroy
### v0.1.0
* Initial release
---
layout: docs
page_title: AWS KMS External Key Store (XKS) - PKCS#11 Provider - Vault Enterprise
description: |-
AWS KMS External Key Store can use Vault as a key store via the Vault PKCS#11 Provider.
---
# Vault with AWS KMS external key store (XKS) via PKCS#11 and XKS proxy
@include 'alerts/enterprise-only.mdx'
~> **Note**: AWS [`xks-proxy`](https://github.com/aws-samples/aws-kms-xks-proxy) is used in this document as a sample implementation.
Vault's KMIP Secrets Engine can be used as an external key store for the AWS KMS [External Key Store (XKS)](https://aws.amazon.com/blogs/aws/announcing-aws-kms-external-key-store-xks/) protocol using the AWS [`xks-proxy`](https://github.com/aws-samples/aws-kms-xks-proxy) along
with the [Vault PKCS#11 Provider](/vault/docs/enterprise/pkcs11-provider).
## Overview
This setup is tested as working with Vault Enterprise 1.11.0 and later with Advanced Data Protection (KMIP support).
Prerequisites:
* A server capable of running XKS Proxy on port 443, which is exposed to the Internet or a VPC endpoint. This can be the same as the Vault server.
* A valid DNS entry with a valid TLS certificate for XKS Proxy.
* `libvault-pkcs11.so` downloaded from [releases.hashicorp.com](https://releases.hashicorp.com/vault-pkcs11-provider) for your platform and available on the XKS Proxy server.
* Vault Enterprise with the KMIP Secrets Engine available, and with TCP port 5696 reachable from wherever XKS Proxy will be running.
There are 3 parts to this setup:
1. Vault KMIP Secrets Engine standard setup. (There is nothing specific to XKS in this setup.)
1. Vault PKCS#11 setup to tell the PKCS#11 provider (`libvault-pkcs11.so`) how to talk to the Vault KMIP Secrets Engine. (There is nothing specific to XKS in this setup.)
1. XKS Proxy setup.
~> **Important**: XKS has a strict 250 ms latency requirement.
In order to serve requests with this latency, we recommend hosting Vault and the XKS proxy as close as possible
to the desired KMS region.
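As a rough check of this budget, you can time TLS round trips to the proxy's public URL with `curl` (shown here against the placeholder `ngrok` endpoint used later in this guide; the exact path doesn't matter for timing purposes):
```shell-session
$ curl -o /dev/null -s -w 'TLS handshake: %{time_appconnect}s, total: %{time_total}s\n' \
    https://example.ngrok.io/
```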
## Vault setup
On the Vault server, we need to [set up the KMIP Secrets Engine](/vault/docs/secrets/kmip):
1. Start the [KMIP Secrets Engine](/vault/docs/secrets/kmip) and listener:
```sh
vault secrets enable kmip
vault write kmip/config listen_addrs=0.0.0.0:5696
```
1. Create a KMIP scope to contain the AES keys that will be accessible.
The KMIP scope is essentially an isolated namespace.
Here is an example creating one called `my-service` (which will be used throughout this document).
```sh
vault write -f kmip/scope/my-service
```
1. Create a KMIP role that has access to the scope:
```sh
vault write kmip/scope/my-service/role/admin operation_all=true
```
1. Create TLS credentials (a certificate, key, and CA bundle) for the KMIP role:
~> **Note**: This command will output the credentials in plaintext.
```sh
vault write -f -format=json kmip/scope/my-service/role/admin/credential/generate | tee kmip.json
```
The response from the `credential/generate` endpoint is JSON.
The `.data.certificate` entry contains a bundle of the TLS client key and certificate we will use to connect to KMIP with from `xks-proxy`.
The `.data.ca_chain[]` entries contain the CA bundle to verify the KMIP server's certificate.
Save these to, e.g., `cert.pem` and `ca.pem`:
```sh
jq --raw-output --exit-status '.data.ca_chain[]' kmip.json > ca.pem
jq --raw-output --exit-status '.data.certificate' kmip.json > cert.pem
```
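If you want to sanity-check the saved files, `openssl` can confirm that the client certificate verifies against the CA chain and show its validity window:
```shell-session
$ openssl verify -CAfile ca.pem cert.pem
$ openssl x509 -in cert.pem -noout -subject -enddate
```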
## XKS proxy setup
The rest of the steps take place on the XKS Proxy server.
For this example, we will use an HTTPS proxy service like [ngrok](https://ngrok.com/) to forward connections
to the XKS proxy. This makes it quick to set up a valid domain and TLS endpoint for testing.
1. Start `ngrok`:
```shell-session
$ ngrok http 8000
```
This will output a domain that can be used to configure KMS later, such as `https://example.ngrok.io`.
1. Copy the `libvault-pkcs11.so` binary to the server, such as `/usr/local/lib` (this should match the `PKCS11_HSM_MODULE` path in the TOML config file below), and `chmod` it so that it is executable.
1. Copy the TLS certificate bundle (e.g., `/etc/kmip/cert.pem`) and CA bundle (e.g., `/etc/kmip/ca.pem`) from the Vault setup to the `xks-proxy` server (the exact location doesn't matter, as long as the `xks-proxy` process has access to them).
1. Create a `configuration/settings_vault.toml` file for the XKS to Vault PKCS#11 configuration,
and set the `XKS_PROXY_SETTINGS_TOML` environment variable to point to the file location.
The important settings to change:
* `[[external_key_stores]]`:
* change the URI path prefix to anything you like
* choose a random access key ID
* choose a random secret access key
* set which key labels are accessible to XKS (`xks_key_id_set`)
* `[pkcs11]`: set the `PKCS11_HSM_MODULE` to the location of the `libvault-pkcs11.so` (or `.dylib`) file downloaded from [releases.hashicorp.com](https://releases.hashicorp.com/vault-pkcs11-provider).
```toml
[server]
ip = "0.0.0.0"
port = 8000
region = "us-east-2"
service = "kms-xks-proxy"
[server.tcp_keepalive]
tcp_keepalive_secs = 60
tcp_keepalive_retries = 3
tcp_keepalive_interval_secs = 1
[tracing]
is_stdout_writer_enabled = true
is_file_writer_enabled = true
level = "DEBUG"
directory = "/var/local/xks-proxy/logs"
file_prefix = "xks-proxy.log"
rotation_kind = "HOURLY"
[security]
is_sigv4_auth_enabled = true
is_tls_enabled = true
is_mtls_enabled = false
[tls]
tls_cert_pem = "tls/server_cert.pem"
tls_key_pem = "tls/server_key.pem"
mtls_client_ca_pem = "tls/client_ca.pem"
mtls_client_dns_name = "us-east-2.alpha.cks.kms.aws.internal.amazonaws.com"
[[external_key_stores]]
uri_path_prefix = "/xyz"
sigv4_access_key_id = "AKIA4GBY3I6JCE5M2HPM"
sigv4_secret_access_key = "1234567890123456789012345678901234567890123="
xks_key_id_set = ["abc123"]
[pkcs11]
session_pool_max_size = 30
session_pool_timeout_milli = 0
session_eager_close = false
user_pin = ""
PKCS11_HSM_MODULE = "/usr/local/lib/libvault-pkcs11.so"
context_read_timeout_milli = 100
[limits]
max_plaintext_in_base64 = 8192
max_aad_in_base64 = 16384
[hsm_capabilities]
can_generate_iv = false
is_zero_iv_required = false
```
~> **Note**: `vault-pkcs11-provider` versions 0.1.0–0.1.2 require the last two lines to be changed to `can_generate_iv = true` and `is_zero_iv_required = true`.
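With the file written, point `xks-proxy` at it via the environment variable mentioned above (substitute wherever you saved the TOML file):
```shell-session
$ export XKS_PROXY_SETTINGS_TOML=/path/to/configuration/settings_vault.toml
```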
1. Create a file, `/etc/vault-pkcs11.hcl` with the following contents:
```hcl
slot {
server = "VAULT_ADDRESS:5696"
tls_cert_path = "/etc/kmip/cert.pem"
ca_path = "/etc/kmip/ca.pem"
scope = "my-service"
}
```
This file is used by `libvault-pkcs11.so` to know how to find and communicate with the KMIP server.
See [the Vault docs](/vault/docs/enterprise/pkcs11-provider) for all available parameters and their usage.
1. If you want to view the Vault logs (helpful when trying to find error messages), you can specify the `VAULT_LOG_FILE` (default is stdout) and `VAULT_LOG_LEVEL` (default is `INFO`). We'd recommend setting `VAULT_LOG_FILE` to something like `/tmp/vault.log` or `/var/log/vault.log`. Other useful log levels are `WARN` (quieter) and `TRACE` (very verbose, could possibly contain sensitive information, like raw network packets).
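For example, before starting `xks-proxy`:
```shell-session
$ export VAULT_LOG_FILE=/var/log/vault.log
$ export VAULT_LOG_LEVEL=DEBUG
```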
1. Create an AES-256 key in KMIP, for example, using `pkcs11-tool` (usually installed with the OpenSC package). See the [Vault docs](/vault/docs/enterprise/pkcs11-provider) for the full setup.
```sh
VAULT_LOG_FILE=/dev/null pkcs11-tool --module ./libvault-pkcs11.so --keygen -a abc123 --key-type AES:32 \
--extractable --allow-sw
Key generated:
Secret Key Object; AES length 32
VALUE:
label: abc123
Usage: encrypt, decrypt, wrap, unwrap
Access: none
```
## Enable XKS in the AWS CLI
1. Create the KMS custom key store with the appropriate parameters to point to your XKS proxy (in this example, through `ngrok`).
```shell-session
$ aws kms create-custom-key-store \
--custom-key-store-name myVaultKeyStore \
--custom-key-store-type EXTERNAL_KEY_STORE \
--xks-proxy-uri-endpoint https://example.ngrok.io \
--xks-proxy-uri-path /xyz/kms/xks/v1 \
--xks-proxy-authentication-credential AccessKeyId=AKIA4GBY3I6JCE5M2HPM,RawSecretAccessKey=1234567890123456789012345678901234567890123= \
--xks-proxy-connectivity PUBLIC_ENDPOINT
{
"CustomKeyStoreId": "cks-d7a55fe93d63191d6"
}
```
1. Tell KMS to connect to the key store.
```shell-session
$ aws kms connect-custom-key-store --custom-key-store-id cks-d7a55fe93d63191d6
```
1. Wait for the `ConnectionState` of your custom key store to be `CONNECTED`. This can take a few minutes.
```shell-session
$ aws kms describe-custom-key-stores --custom-key-store-id cks-d7a55fe93d63191d6
```
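To poll just the connection state, a JMESPath `--query` narrows the output to a single value (field names follow the standard `describe-custom-key-stores` response shape):
```shell-session
$ aws kms describe-custom-key-stores --custom-key-store-id cks-d7a55fe93d63191d6 \
    --query 'CustomKeyStores[0].ConnectionState' --output text
CONNECTED
```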
1. Create a KMS key associated with the XKS key ID (`abc123` in this example):
```shell-session
$ aws kms create-key --custom-key-store-id cks-d7a55fe93d63191d6 \
--xks-key-id abc123 --origin EXTERNAL_KEY_STORE
{
"KeyMetadata": {
"AWSAccountId": "111111111111",
"KeyId": "a93f205a-2a37-4338-aa64-92b4a4b0b67d",
"Arn": "arn:aws:kms:us-east-2:111111111111:key/a93f205a-2a37-4338-aa64-92b4a4b0b67d",
"CreationDate": "2022-12-22T11:03:23.695000-08:00",
"Enabled": true,
"Description": "",
"KeyUsage": "ENCRYPT_DECRYPT",
"KeyState": "Enabled",
"Origin": "EXTERNAL_KEY_STORE",
"CustomKeyStoreId": "cks-16460f66b34705025",
"KeyManager": "CUSTOMER",
"CustomerMasterKeySpec": "SYMMETRIC_DEFAULT",
"KeySpec": "SYMMETRIC_DEFAULT",
"EncryptionAlgorithms": [
"SYMMETRIC_DEFAULT"
],
"MultiRegion": false,
"XksKeyConfiguration": {
"Id": "abc123"
}
}
}
```
1. Encrypt some data with this key:
```shell-session
$ aws kms encrypt --key-id a93f205a-2a37-4338-aa64-92b4a4b0b67d --plaintext YWJjMTIzCg==
{
"CiphertextBlob": "somerandomciphertextblob=",
"KeyId": "arn:aws:kms:us-east-2:111111111111:key/a93f205a-2a37-4338-aa64-92b4a4b0b67d",
"EncryptionAlgorithm": "SYMMETRIC_DEFAULT"
}
```
1. Decrypt the resulting ciphertext:
```shell-session
$ aws kms decrypt --ciphertext-blob somerandomciphertextblob=
{
"KeyId": "arn:aws:kms:us-east-2:111111111111:key/a93f205a-2a37-4338-aa64-92b4a4b0b67d",
"Plaintext": "YWJjMTIzCg==",
"EncryptionAlgorithm": "SYMMETRIC_DEFAULT"
}
```
1. Optionally, clean up your key and key store with:
```shell-session
$ aws kms disable-key --key-id a93f205a-2a37-4338-aa64-92b4a4b0b67d
$ aws kms disconnect-custom-key-store --custom-key-store-id cks-d7a55fe93d63191d6
$ aws kms delete-custom-key-store --custom-key-store-id cks-d7a55fe93d63191d6
```
(The `aws kms delete-custom-key-store` command will not succeed until all keys in the key store have been disabled and deleted.)
---
layout: docs
page_title: Oracle TDE - PKCS#11 Provider - Vault Enterprise
description: |-
The Vault PKCS#11 Provider can be used to enable Oracle TDE.
---
# Oracle TDE
@include 'alerts/enterprise-only.mdx'
[Oracle Transparent Data Encryption](https://docs.oracle.com/en/database/oracle/oracle-database/19/asoag/introduction-to-transparent-data-encryption.html) (TDE)
is supported with the [Vault PKCS#11 provider](/vault/docs/enterprise/pkcs11-provider).
In this setup, Vault's KMIP engine generates and stores the "TDE Master Encryption Key" that the Oracle Database uses to encrypt and decrypt the "TDE Table Keys".
Oracle will not have access to the TDE Master Encryption Key itself.
## Requirements
To setup Oracle TDE backed by Vault, the following are required:
- A database running Oracle 19c Enterprise Edition
- A Vault Enterprise 1.11+ server with Advanced Data Protection for KMIP support.
- Vault has TCP port 5696 accessible to the Oracle database.
- `libvault-pkcs11.so` downloaded from [releases.hashicorp.com](https://releases.hashicorp.com/vault-pkcs11-provider) for the operating system running the Oracle database.
## Vault setup
On the Vault server, we need to [set up the KMIP Secrets Engine](/vault/docs/secrets/kmip):
1. Start the KMIP Secrets Engine and listener:
```sh
vault secrets enable kmip
vault write kmip/config listen_addrs=0.0.0.0:5696
```
~> **Important**: When configuring KMIP for Oracle, you will probably need to set the
`server_hostnames` and `server_ips` [configuration parameters](/vault/api-docs/secret/kmip#parameters),
otherwise the TLS connection to the KMIP Secrets Engine will fail due to certification validation errors.
When configuring Oracle TDE, this error can manifest as the `sqlplus` session silently hanging.
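For example (a sketch; substitute the hostname and IP that Oracle will use to reach Vault):
```sh
vault write kmip/config \
    listen_addrs=0.0.0.0:5696 \
    server_hostnames=vault.example.com \
    server_ips=10.0.0.5
```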
1. Create a KMIP scope to contain the TDE keys and objects.
The KMIP scope is essentially an isolated namespace.
For example, you can create a scope called `my-service`:
```sh
vault write -f kmip/scope/my-service
```
1. Create a KMIP role that has access to the scope:
```sh
vault write kmip/scope/my-service/role/admin operation_all=true
```
1. Create TLS credentials (a certificate, key, and CA bundle) for the KMIP role:
~> **Note**: This command will output the credentials in plaintext.
```sh
vault write -f -format=json kmip/scope/my-service/role/admin/credential/generate | tee kmip.json
```
The response from the `credential/generate` endpoint is JSON.
The `.data.certificate` entry contains a bundle of the TLS client key and certificate we will use to connect to KMIP with from Oracle.
The `.data.ca_chain[]` entries contain the CA bundle to verify the KMIP server's certificate.
Save these to, e.g., `cert.pem` and `ca.pem`:
```sh
jq --raw-output --exit-status '.data.ca_chain[]' kmip.json > ca.pem
jq --raw-output --exit-status '.data.certificate' kmip.json > cert.pem
```
## Oracle TDE preparation
The rest of the steps take place on the Oracle server.
We need to configure the Vault PKCS#11 provider.
1. Copy the `libvault-pkcs11.so` binary into `$ORACLE_BASE/extapi/64/hsm`, and ensure there are no other PKCS#11 libraries in `$ORACLE_BASE/extapi/64/hsm`.
1. Copy the TLS certificate and key bundle (e.g., `/etc/cert.pem`) and CA bundle (e.g., `/etc/ca.pem`) for the KMIP role (configured as above) to the Oracle server.
The exact location does not matter as long as the Oracle process has access to it.
1. Create a configuration file, for example `/etc/vault-pkcs11.hcl`, with the following contents:
```hcl
slot {
server = "VAULT_ADDRESS:5696"
tls_cert_path = "/etc/cert.pem"
ca_path = "/etc/ca.pem"
scope = "my-service"
}
```
This file is used by `libvault-pkcs11.so` to know how to find and communicate with the KMIP engine in Vault.
In particular:
- The `slot` block configures the first PKCS#11 slot to point to Vault. Oracle will use this first slot.
- `server` should point to the Vault server's IP (or DNS name) and port number (5696 is the default).
- `tls_cert_path` should be the location on the Oracle database of the client TLS certificate and key bundle used to connect to the Vault server.
- `ca_path` should be the location of the CA bundle on the Oracle database.
- `scope` is the KMIP scope to authenticate against and where the TDE master keys and associated metadata will be stored.
The default location the PKCS#11 library will look for the configuration file is the current directory (`./vault-pkcs11.hcl`) and `/etc/vault-pkcs11.hcl`, but you can override this by setting the `VAULT_KMIP_CONFIG` environment variable to any file.
1. If you want to view the Vault logs (helpful when trying to find error messages), you can specify the `VAULT_LOG_FILE` (default is stdout) and `VAULT_LOG_LEVEL` (default is `INFO`). We'd recommend setting `VAULT_LOG_FILE` to something like `/tmp/vault.log` or `/var/log/vault.log`. Other useful log levels are `WARN` (quieter) and `TRACE` (verbose, could possibly contain sensitive information, like raw network packets).
## Enable TDE
The only remaining step is to set up Oracle TDE for an external HSM using the shared library `libvault-pkcs11.so`.
These steps are not specific to Vault, other than requiring the shared library, HCL configuration, and certificates to be present.
TDE is complex, but one example procedure to enable it is:
1. Open a `sqlplus` session into the root container (or switch into it with `ALTER SESSION SET CONTAINER = CDB$ROOT;`).
1. Set WALLET_ROOT and TDE_CONFIGURATION parameters on the Oracle database. The wallet root directory is only used to set the TDE configuration parameter. To learn more about the wallet parameters refer to the [Oracle TDE documentation](https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/TDE_CONFIGURATION.html).
```sql
SQL> alter system set wallet_root='/opt/oracle/admin/ORCLCDB/wallet' scope=spfile;
SQL> shutdown immediate;
SQL> startup;
SQL> alter system set TDE_CONFIGURATION="KEYSTORE_CONFIGURATION=HSM" SCOPE=both;
```
1. Validate that the parameters are set by querying `V$PARAMETER`:
```sql
SQL> SELECT name, value from V$PARAMETER WHERE NAME IN ('wallet_root','tde_configuration');
NAME VALUE
------------------------------ --------------------------------------------------
wallet_root /opt/oracle/admin/ORCLCDB/wallet
tde_configuration KEYSTORE_CONFIGURATION=HSM
```
1. Open the HSM wallet: `ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "1234" CONTAINER = ALL;`.
The password `1234` here is used as the password for decrypting the TLS key, if it is stored encrypted on disk.
If the TLS key is not encrypted, this password is ignored.
1. Create the TDE master key: `ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY USING TAG 'default' IDENTIFIED BY "1234" CONTAINER = ALL;`, again specifying the TLS key password if necessary.
1. Finally, use TDE in a PDB, e.g., `CREATE TABLE test_tde (something CHAR(32) ENCRYPT);`.
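Putting the last three steps together, a minimal `sqlplus` session (reusing the example password and tag from above) looks like:
```sql
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "1234" CONTAINER = ALL;
SQL> ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY USING TAG 'default' IDENTIFIED BY "1234" CONTAINER = ALL;
-- then, inside a PDB:
SQL> CREATE TABLE test_tde (something CHAR(32) ENCRYPT);
```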
More extensive information on the details and procedures for Oracle TDE can be found in [Oracle's documentation](https://docs.oracle.com/en/database/oracle/oracle-database/19/asoag/configuring-transparent-data-encryption.html#GUID-753C4808-CC51-4DA1-A5C3-980417FDAB14).
---
layout: docs
page_title: Vault Enterprise FIPS 140-2 Inside
description: |-
Vault Enterprise features a special build with FIPS 140-2 support built into
the Vault binary. This can directly be used for FIPS compliance.
---
# FIPS 140-2 inside
@include 'alerts/enterprise-only.mdx'
Special builds of Vault Enterprise (marked with a `fips1402` feature name)
include built-in support for FIPS 140-2 compliance. Unlike using Seal Wrap
for FIPS compliance, this binary has no external dependency on an HSM.
To use this feature, you must have an active or trial license for Vault
Enterprise Plus (HSMs). To start a trial, contact [HashiCorp
sales](mailto:[email protected]).
## Using FIPS 140-2 Vault Enterprise
FIPS 140-2 Inside versions of Vault Enterprise behave like non-FIPS versions
of Vault. No restrictions are placed on algorithms; it is up to the operator
to ensure Vault remains in a FIPS-compliant mode of operation. This means
configuring some Secrets Engines to permit a limited set of algorithms (e.g.,
forbidding ed25519-based CAs with PKI Secrets Engines).
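For example, a PKI role can be pinned to RSA key material (a sketch; the mount and role names are placeholders, and the parameters are the standard PKI role options):
```sh
vault write pki/roles/fips-compliant \
    key_type=rsa \
    key_bits=2048 \
    allowed_domains=example.com \
    allow_subdomains=true
```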
Because Vault Enterprise may return secrets in plain text, it is important to
ensure the Vault server's `listener` configuration section utilizes TLS. This
ensures secrets are transmitted securely from Server to Client. Additionally,
note that TLSv1.3 will not work with FIPS 140-2 Inside, as HKDF is not a
certified primitive. If TLSv1.3 is desired, it is suggested to front Vault
Server with a FIPS-certified load balancer.
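A `listener` stanza along these lines keeps client traffic on TLSv1.2 (certificate paths are placeholders):
```hcl
listener "tcp" {
  address         = "0.0.0.0:8200"
  tls_cert_file   = "/etc/vault/tls/server.pem"
  tls_key_file    = "/etc/vault/tls/server-key.pem"
  tls_min_version = "tls12"
  # TLSv1.3 depends on HKDF, which is not a certified primitive
  tls_max_version = "tls12"
}
```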
A non-exhaustive list of potential compliance issues includes:
- Using Ed25519 or ChaCha20+Poly1305 keys with the Transit Secrets Engine,
- Using Ed25519 keys as CAs in the PKI or SSH Secrets Engines,
- Using FF3-1/FPE in Transform Secrets Engine, or
- Using a Derived Key (using HKDF) for Agent auto-auth or the Transit
Secrets Engine.
- Using **Entropy Augmentation**: because BoringCrypto uses its internal,
FIPS 140-2 approved RNG, it cannot mix entropy from other sources.
Attempting to use EA with FIPS 140-2 HSM enabled binaries will result
in failures such as `panic: boringcrypto: invalid code execution`.
HashiCorp can only provide general guidance regarding using Vault Enterprise
in a FIPS-compliant manner. We are not a NIST-certified testing laboratory
and thus organizations may need to consult an approved auditor for a final
determination.
The FIPS 140-2 variant of Vault uses separate binaries; these are available
from the following sources:
- From the [HashiCorp Releases Page](https://releases.hashicorp.com/vault),
ending with the `+ent.fips1402` and `+ent.hsm.fips1402` suffixes.
- From the [Docker Hub `hashicorp/vault-enterprise-fips`](https://hub.docker.com/r/hashicorp/vault-enterprise-fips)
container repository.
- From the [AWS ECR `hashicorp/vault-enterprise-fips`](https://gallery.ecr.aws/hashicorp/vault-enterprise-fips)
container repository.
- From the [Red Hat Access `hashicorp/vault-enterprise-fips`](https://catalog.redhat.com/software/containers/hashicorp/vault-enterprise-fips/628d50e37ff70c66a88517ea)
container repository.
~> **Note**: When pulling the FIPS UBI-based images, note that they are
ultimately designed for OpenShift certification; consider either adding
the `--user root --cap-add IPC_LOCK` options, to allow Vault to enable
mlock, or use the `--env SKIP_SETCAP=1` option, to disable mlock
completely, as appropriate for your environment.
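For example (a sketch of the two suggested invocations; add your usual license and listener configuration):
```shell-session
$ docker run --user root --cap-add IPC_LOCK hashicorp/vault-enterprise-fips
$ # or, to disable mlock entirely:
$ docker run --env SKIP_SETCAP=1 hashicorp/vault-enterprise-fips
```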
### Usage restrictions
#### Migration restrictions
HashiCorp **does not** support in-place migrations from non-FIPS Inside
versions of Vault to FIPS Inside versions of Vault, regardless of version.
A fresh cluster installation is required to receive support. We generally
recommend avoiding direct upgrades and replicated-migrations for several
reasons:
- Old entries remain encrypted with the old barrier key until overwritten;
this barrier key was likely not created by a FIPS library and thus
is not compliant.
- Many secrets engines internally create keys; things like Transit create
and store keys, but don't store any data (inside of Vault) -- these would
still need to be accessible and rotated to a new, FIPS-compliant key.
Any PKI engines would have also created non-compliant keys, but rotation
of, say, a Root CA involves a concerted, non-Vault effort to accomplish
and must be done thoughtfully.
As such, HashiCorp cannot provide support for workloads that are affected,
either technically or through non-compliance, as a result of converting
existing cluster workloads to the FIPS 140-2 Inside binary.
Instead, we suggest leaving the existing cluster in place, and carefully
considering migration of specific workloads to the FIPS-backed cluster.
#### Entropy augmentation restrictions
Entropy Augmentation **does not** work with FIPS 140-2 Inside. The internal
BoringCrypto RNG is FIPS 140-2 certified and does not accept entropy from
other sources. On Vault 1.11.0 and later, attempting to use Entropy
Augmentation will result in a warning ("Entropy Augmentation is not supported...")
and Entropy Augmentation will be disabled.
#### TLS restrictions
Vault Enterprise's FIPS modifications include restrictions to supported TLS
cipher suites and key information. Only the following cipher suites are
allowed:
- `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`,
- `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`,
- `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`,
- `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`,
- `TLS_RSA_WITH_AES_128_GCM_SHA256`, and
- `TLS_RSA_WITH_AES_256_GCM_SHA384`.
Additionally, only the following key types are allowed in TLS chains of trust:
- RSA 2048, 3072, 4096, 7680, and 8192-bit;
- ECDSA P-256, P-384, and P-521.
Finally, only TLSv1.2 or higher is supported in FIPS mode. These are in line
with recent NIST guidance and recommendations.
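If you would like the listener configuration to make this explicit (Vault already enforces it in FIPS mode), the `tls_cipher_suites` listener parameter accepts these Go cipher suite names as a comma-separated list:
```hcl
listener "tcp" {
  address           = "0.0.0.0:8200"
  tls_cert_file     = "/etc/vault/tls/server.pem"
  tls_key_file      = "/etc/vault/tls/server-key.pem"
  tls_cipher_suites = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
}
```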
#### Heterogeneous cluster deployments
HashiCorp does not support mixed deployment scenarios within the same Vault
cluster, e.g., mixing FIPS and non-FIPS Vault binary versions, or mixing FIPS
Inside with FIPS Seal Wrap clusters. Cluster nodes must be of a single
binary/deployment type across the entire cluster. Usage of Seal Wrap with
the FIPS Inside binary is permitted.
Running a heterogeneous cluster is not permitted by FIPS, as some components
of the system would not be FIPS compliant.
## Technical details
Vault Enterprise's FIPS 140-2 Inside binaries rely on a special version of the
Go toolchain which includes a FIPS-validated BoringCrypto version. To ensure
your version of Vault Enterprise includes FIPS support, after starting the
server, make sure you see a `Fips:` line reporting that FIPS is enabled, such as:
```
Fips: FIPS 140-2 Enabled, BoringCrypto version 7
```
~> **Note**: FIPS 140-2 Inside binaries depend on cgo, which requires that a
GNU C Library (glibc) Linux distribution be used to run Vault. We've
additionally opted to certify only on the AMD64 architecture at this time.
This means these binaries will not work on Alpine Linux based containers.
### FIPS 140-2 inside and external plugins
Vault Enterprise's built-in plugins are compiled into the Vault binary using
the same Go toolchain version that compiled the core Vault; this results in
these plugins having FIPS 140-2 compliance status as well. This same guarantee
does not apply to external plugins.
### Validating FIPS 140-2 inside
To validate that the FIPS 140-2 Inside binary correctly includes BoringCrypto,
run `go tool nm` on the binary to get a symbol dump. On non-FIPS builds,
searching for `goboringcrypto` in the output will yield no results, but on
FIPS-enabled builds, you'll see many results with this:
```
$ go tool nm vault | grep -i goboringcrypto
4014d0 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_cbc_encrypt
4014f0 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_ctr128_encrypt
401520 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_decrypt
401540 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_encrypt
401560 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_set_decrypt_key
...additional lines elided...
```
All FIPS cryptographic modules must execute startup tests. BoringCrypto uses
the `_goboringcrypto_BORINGSSL_bcm_power_on_self_test` symbol for this. To
ensure the Vault Enterprise binary is correctly executing startup tests, use
[GDB](https://www.sourceware.org/gdb/) to stop execution on this function to
ensure it gets hit.
```
$ gdb --args vault server -dev
...GDB startup messages elided...
(gdb) break _goboringcrypto_BORINGSSL_bcm_power_on_self_test
...breakpoint location elided...
(gdb) run
...additional GDB output elided...
Thread 1 "vault" hit Breakpoint 1, 0x0000000000454950 in _goboringcrypto_BORINGSSL_bcm_power_on_self_test ()
(gdb) backtrace
#0 0x0000000000454950 in _goboringcrypto_BORINGSSL_bcm_power_on_self_test ()
#1 0x00000000005da8f0 in runtime.asmcgocall () at /usr/local/hashicorp-fips-go-devel/src/runtime/asm_amd64.s:765
#2 0x00007fffd07a5a18 in ?? ()
#3 0x00007fffffffdf28 in ?? ()
#4 0x000000000057ebce in runtime.persistentalloc.func1 () at /usr/local/hashicorp-fips-go-devel/src/runtime/malloc.go:1371
#5 0x00000000005d8a49 in runtime.systemstack () at /usr/local/hashicorp-fips-go-devel/src/runtime/asm_amd64.s:383
#6 0x00000000005dd189 in runtime.newproc (siz=6129989, fn=0x5d88fb <runtime.rt0_go+315>) at <autogenerated>:1
#7 0x0000000000000000 in ?? ()
```
Exact output may vary.
<div {...{"className":"alert alert-warning g-type-body"}}>
**Note**: When executing Vault Enterprise within GDB, GDB must rewrite
parts of the binary to permit stopping on the specified breakpoint. This
results in the HMAC of the contained BoringCrypto library changing,
breaking the FIPS integrity check. If execution were to be continued
in the example above via the `continue` command, a message like the
following would be emitted:
```
Continuing.
FIPS integrity test failed.
Expected: 18d35ae031f649825a4269d68d2e62583d060a31d359690f97b9c8bf8120cdf75b405f74be7018094da7eb5261f2f86d0f481cc3b5a9c7c432268d94bf91aad9
Calculated: 111502a3201de3b23f54b29d79ca6a1a754f94ecfc57a379444aac0d3ada68bf3c06834e6d84e68599bdf763e28e2c994fcdaeac84adabd180b59cad5fc980bb
Thread 1 "vault" received signal SIGABRT, Aborted.
```
This is expected. Rerunning Vault without GDB (or with no breakpoints
set -- e.g., `delete 1`) will still result in this function executing, but
with the FIPS integrity check succeeding.
</div>
### BoringCrypto certification
BoringCrypto Version 7 uses the following FIPS 140-2 Certificate and software
version:
- NIST CMVP [Certificate #3678](https://csrc.nist.gov/Projects/Cryptographic-Module-Validation-Program/Certificate/3678).
- BoringSSL Release [`ae223d6138807a13006342edfeef32e813246b39`](https://github.com/google/boringssl/commit/ae223d6138807a13006342edfeef32e813246b39).
The following algorithms were certified as part of this release:
- RSA in all key sizes greater than or equal to 2048 bits (tested at 2048
and 3072 bits),
- ECDSA and ECDH with P-224 (not accessible from Vault), P-256, P-384, and
P-521,
- AES symmetric encryption with 128/192/256-bit CBC, ECB, and CTR modes and
  128/256-bit GCM modes,
- SHA-1 and SHA-2 (224, 256, 384, and 512-bit variants),
- HMAC+SHA-2 with 224, 256, 384, and 512-bit variants of SHA-2,
- TLSv1.0/TLSv1.1 and TLSv1.2 KDFs,
- AES-256 based CTR_DRBG CS-PRNG.
### Leidos compliance
See the updated [Leidos Compliance Letter (V Entr v1.10.0+entrfips) for FIPS Inside](https://www.datocms-assets.com/2885/1653327036-boringcrypto_compliance_letter_signed.pdf) using the Boring Crypto Libraries for more details. All [past letters](https://www.hashicorp.com/vault-compliance) are also available for reference.
What is the difference between Seal Wrap FIPS 140 compliance and the new FIPS Inside compliance?
- Only the storage of sensitive entries (seal wrapped entries) is covered by FIPS-validated crypto when using Seal Wrapping.
- The TLS connection to Vault by clients is not covered by FIPS-validated crypto when using Seal Wrapping (it is when using FIPS 140-2 Inside, per items 1, 2, 7, and 13 in the updated letter).
- The generation of key material wasn't using FIPS-validated crypto in the Seal Wrap version (for example, the PKI certificates: item 8 in the updated FIPS 140-2 Inside letter; or SSH module: item 10 in the updated FIPS 140-2 Inside letter).
- With Seal Wrapping, some entries were protected with FIPS-validated crypto, but all crypto in Vault wasn't FIPS certified. With FIPS 140-2 Inside, by default (if the algorithm is certified), Vault will be using the certified crypto implementation.
---
layout: docs
page_title: Userpass - Auth Methods
description: >-
The "userpass" auth method allows users to authenticate with Vault using a
username and password.
---
# Userpass auth method
The `userpass` auth method allows users to authenticate with Vault using
a username and password combination.
The username/password combinations are configured directly to the auth
method using the `users/` path. This method cannot read usernames and
passwords from an external source.
The method lowercases all submitted usernames, e.g. `Mary` and `mary` are the
same entry.
This documentation assumes the Username & Password method is mounted at the default `/auth/userpass`
path in Vault. Since it is possible to enable auth methods at any location,
please update your CLI calls accordingly with the `-path` flag.
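For example, a login against a hypothetical mount at `my-userpass` (the mount path here is illustrative) would look like:

```shell-session
$ vault login -method=userpass -path=my-userpass \
    username=mitchellh \
    password=foo
```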
## Authentication
### Via the CLI
```shell-session
$ vault login -method=userpass \
username=mitchellh \
password=foo
```
### Via the API
```shell-session
$ curl \
--request POST \
--data '{"password": "foo"}' \
http://127.0.0.1:8200/v1/auth/userpass/login/mitchellh
```
The response will contain the token at `auth.client_token`:
```json
{
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": null,
"auth": {
"client_token": "c4f280f6-fdb2-18eb-89d3-589e2e834cdb",
"policies": ["admins"],
"metadata": {
"username": "mitchellh"
},
"lease_duration": 0,
"renewable": false
}
}
```
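The returned `client_token` authenticates subsequent requests. As a quick sanity check (a sketch reusing the example token above), look up the token's own properties:

```shell-session
$ curl \
    --header "X-Vault-Token: c4f280f6-fdb2-18eb-89d3-589e2e834cdb" \
    http://127.0.0.1:8200/v1/auth/token/lookup-self
```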
## Configuration
Auth methods must be configured in advance before users or machines can
authenticate. These steps are usually completed by an operator or configuration
management tool.
1. Enable the userpass auth method:
```shell-session
$ vault auth enable userpass
```
Enable the `userpass` auth method at the default `auth/userpass` path.
You can choose to enable the auth method at a different path with the `-path` flag:
```shell-session
$ vault auth enable -path=<path> userpass
```
1. Configure it with users that are allowed to authenticate:
```shell-session
$ vault write auth/<userpass:path>/users/mitchellh \
password=foo \
policies=admins
```
This creates a new user "mitchellh" with the password "foo" that will be
associated with the "admins" policy. This is the only configuration
necessary.
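Equivalently, you can create or update the same user over the HTTP API. A rough equivalent of the CLI command above (assuming a sufficiently privileged token in `$VAULT_TOKEN`):

```shell-session
$ curl \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    --request POST \
    --data '{"password": "foo", "token_policies": "admins"}' \
    http://127.0.0.1:8200/v1/auth/userpass/users/mitchellh
```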
## User lockout
@include 'user-lockout.mdx'
## API
The Userpass auth method has a full HTTP API. Please see the [Userpass auth
method API](/vault/api-docs/auth/userpass) for more details.
---
layout: docs
page_title: OCI Auth method
description: >-
The OCI Auth method for Vault enables authentication and authorization using
OCI Identity credentials.
---
# OCI auth method
The OCI Auth method for Vault enables authentication and authorization using [OCI Identity](https://docs.cloud.oracle.com/iaas/Content/Identity/Concepts/overview.htm) credentials.
This plugin is developed in a separate GitHub repository at https://github.com/hashicorp/vault-plugin-auth-oci,
but is automatically bundled in Vault releases. Please file all feature requests, bugs, and pull requests
specific to the OCI plugin under that repository.
## OCI roles
The OCI Auth method authorizes using roles, as shown here:

There is a many-to-many relationship between various items seen above:
- A user can belong to many identity groups.
- An identity group can contain many users.
- A compute instance can belong to many dynamic groups.
- A dynamic group can contain many compute instances.
- A role defined in Vault can be mapped to many groups and dynamic groups.
- A single role can be mapped to both groups and dynamic groups.
- A Vault policy can be mapped from different roles.
The `ocid_list` field of a role is a list of [Group or Dynamic Group](https://docs.cloud.oracle.com/iaas/Content/Identity/Concepts/overview.htm#one) OCIDs. Only members of these Groups or Dynamic Groups are allowed to take this role.
## Configuration
### Configure the OCI tenancy to run Vault
The OCI Auth method requires [instance principal](https://blogs.oracle.com/cloud-infrastructure/announcing-instance-principals-for-identity-and-access-management) credentials to call OCI Identity APIs, and therefore the Vault server needs to run inside an OCI compute instance.
Follow the steps below to add policies to your tenancy that allow the OCI compute instance in which the Vault server is running to call certain OCI Identity APIs.
1. In your tenancy, [launch the compute instance(s)](https://docs.cloud.oracle.com/iaas/Content/Compute/Tasks/launchinginstance.htm) that will run the Vault server. The [VCN](https://docs.cloud.oracle.com/iaas/Content/Network/Tasks/managingVCNs.htm) in which you launch the compute instance should have a [Service Gateway](https://docs.cloud.oracle.com/iaas/Content/Network/Tasks/servicegateway.htm) added to it.
1. Make a note of the Oracle Cloud Identifier (OCID) of the compute instance(s) running Vault.
1. In your tenancy, [create a dynamic group](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingdynamicgroups.htm) named `VaultDynamicGroup` to contain the compute instance(s).
1. Add the OCID of the compute instance(s) to the dynamic group.
1. Add the following policies to the root compartment of your tenancy that allow the dynamic group to call specific Identity APIs.
```plaintext
allow dynamic-group VaultDynamicGroup to {AUTHENTICATION_INSPECT} in tenancy
allow dynamic-group VaultDynamicGroup to {GROUP_MEMBERSHIP_INSPECT} in tenancy
```
### Configure the OCI auth method
First, enable the OCI Auth method.
```shell-session
$ vault auth enable oci
```
Then, configure your home tenancy in Vault so that only users or instances from your tenancy will be allowed to log into Vault through the OCI Auth method.
1. Create a file named `hometenancyid.json` with the below content using the
tenancy OCID. To find your tenancy OCID, see
the [Oracle Cloud IDs documentation](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/identifiers.htm).
```json
{ "home_tenancy_id": "your tenancy ocid here" }
```
1. Configure the `home_tenancy_id` parameter in Vault.
```shell-session
$ curl --header "X-Vault-Token: $roottoken" --request POST \
--data @hometenancyid.json \
http://127.0.0.1:8200/v1/auth/oci/config
```
Continue by creating a Vault administrator role in the OCI Auth method. The `vaultadminrole` allows the administrator of Vault to log into Vault and grants them the permissions allowed in the policy.
1. Create a file named `vaultadminrole.json` with the below contents. Replace the `ocid_list` with the
Group or Dynamic Group OCIDs in your tenancy that contain the users or instances that you want to take the Vault admin role.
- For testing in dev mode, you can add the OCID of the dynamic group previously created.
- In production, add only the OCID of groups and dynamic groups that can take the admin role in Vault.
```json
{
"token_policies": "vaultadminpolicy",
"token_ttl": "1800",
"ocid_list": "ocid1.group.oc1..aaaaaaaaiqnblimpvmegkqh3bxilrdvjobr7qd223g275idcqhexamplefq,ocid1.dynamicgroup.oc1..aaaaaaaa5hmfyrdaxvmt52ekju5n7ffamn2pdvxaq6esb2vzzoduexamplea"
}
```
1. Create the Vault admin role:
```shell-session
$ curl --header "X-Vault-Token: $roottoken" --request POST \
--data @vaultadminrole.json \
http://127.0.0.1:8200/v1/auth/oci/role/vaultadminrole
```
### Log in to Vault using OCI auth
Both of the login methods described below return a response that includes a token with the previously added policy.
You can use the received token to read or write secrets, and add roles per the instructions in [/docs/secrets/kv/kv-v1](/vault/docs/secrets/kv/kv-v1).
For both methods to work:
- The `VAULT_ADDR` environment variable must be exported; when testing in dev mode on the same compute instance that Vault is running on, this is [http://127.0.0.1:8200](http://127.0.0.1:8200/).
#### Log in with instance principals
```shell-session
$ vault login -method=oci auth_type=instance role=vaultadminrole
```
This assumes that the compute instance you are logging in from is part of a dynamic group that was added to the Vault admin role. If you are logging in from a different compute instance than the one Vault is running on, that instance must have connectivity to the endpoint specified in `VAULT_ADDR`.
#### Log in with an API key
```shell-session
$ vault login -method=oci auth_type=apikey role=vaultadminrole
```
This assumes you have an OCI API key.
If you don't have an API key:
1. [Add an API Key](https://docs.cloud.oracle.com/iaas/Content/API/Concepts/apisigningkey.htm) for a user in the console. This user should be part of a group that has previously been added to the Vault admin role.
1. Create the config file `~/.oci/config` using the user's credentials, as detailed in the [SDK and CLI configuration documentation](https://docs.cloud.oracle.com/iaas/Content/API/Concepts/sdkconfig.htm); a minimal example follows this list.
1. Ensure that the region in the config matches the region of the compute instance that is running Vault.
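For reference, a minimal `~/.oci/config` generally looks like the following; every value is a placeholder for your own tenancy, user, and key details:

```plaintext
[DEFAULT]
user=ocid1.user.oc1..<unique_user_id>
fingerprint=<api_key_fingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<unique_tenancy_id>
region=us-ashburn-1
```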
### Manage roles in the OCI auth method
1. Similar to creating the Vault administrator role, create other roles mapped to other policies. Create a file named `devrole.json` with the following contents. Replace `ocid_list` with Group or Dynamic Group OCIDs in your tenancy.
```json
{
"token_policies": "devpolicy",
"token_ttl": "1500",
"ocid_list": "ocid1.group.oc1..aaaaaaaaiqnblimpvmgrouplrdvjobr7qd223g275idcqhexamplefq,ocid1.dynamicgroup.oc1..aaaaaaaa5hmfyrdaxvmdg2u5n7ffamn2pdvxaq6esb2vzzoduexamplea"
}
```
2. Add the role.
```shell-session
$ curl --header "X-Vault-Token: $token" --request POST \
--data @devrole.json \
http://127.0.0.1:8200/v1/auth/oci/role/devrole
```
3. Log in to Vault assuming the `devrole`.
```shell-session
$ vault login -method=oci auth_type=instance role=devrole
```
## Authentication
You can authenticate with the Vault CLI or by communicating with the API directly.
### Via the CLI
With Compute Instance credentials:
```shell-session
$ vault login -method=oci auth_type=instance role=devrole
```
With User credentials ([SDK configuration](https://docs.cloud.oracle.com/iaas/Content/API/Concepts/sdkconfig.htm)):
```shell-session
$ vault login -method=oci auth_type=apikey role=devrole
```
### Via the API
1. Sign the following request with your OCI credentials and obtain the signing string and the authorization header. Replace the endpoint, scheme (http or https), and role in the URL to match your Vault configuration. For more information on signing, see [signing the request](https://docs.cloud.oracle.com/iaas/Content/API/Concepts/signingrequests.htm).
http://127.0.0.1/v1/auth/oci/login/devrole
1. On signing the above request, you will get the following headers.
The signing string (line breaks inserted into the (request-target) header for easier reading):
<CodeBlockConfig hideClipboard>
```text
date: Fri, 22 Aug 2019 21:02:19 GMT
(request-target): get /v1/auth/oci/login/devrole
host: 127.0.0.1
```
</CodeBlockConfig>
The Authorization header:
<CodeBlockConfig hideClipboard>
```text
Signature version="1",headers="date (request-target) host",keyId="ocid1.t
enancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq/
ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3ryn
jq/73:61:a2:21:67:e0:df:be:7e:4b:93:1e:15:98:a5:b7",algorithm="rsa-sha256
",signature="GBas7grhyrhSKHP6AVIj/h5/Vp8bd/peM79H9Wv8kjoaCivujVXlpbKLjMPe
DUhxkFIWtTtLBj3sUzaFj34XE6YZAHc9r2DmE4pMwOAy/kiITcZxa1oHPOeRheC0jP2dqbTll
8fmTZVwKZOKHYPtrLJIJQHJjNvxFWeHQjMaR7M="
```
</CodeBlockConfig>
1. Add the signed headers to the "request_headers" field and make the actual request to Vault. For example:
<CodeBlockConfig hideClipboard>
```sh
POST http://127.0.0.1/v1/auth/oci/login/devrole
"request_headers": {
"date": ["Fri, 22 Aug 2019 21:02:19 GMT"],
"(request-target)": ["get /v1/auth/oci/login/devrole"],
"host": ["127.0.0.1"],
"content-type": ["application/json"],
"authorization": ["Signature algorithm=\"rsa-sha256\",headers=\"date (request-target) host\",keyId=\"ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq/ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq/73:61:a2:21:67:e0:df:be:7e:4b:93:1e:15:98:a5:b7\",signature=\"GBas7grhyrhSKHP6AVIj/h5/Vp8bd/peM79H9Wv8kjoaCivujVXlpbKLjMPeDUhxkFIWtTtLBj3sUzaFj34XE6YZAHc9r2DmE4pMwOAy/kiITcZxa1oHPOeRheC0jP2dqbTll8fmTZVwKZOKHYPtrLJIJQHJjNvxFWeHQjMaR7M=\",version=\"1\""]
}
```
</CodeBlockConfig>
## API
The OCI Auth method has a full HTTP API. Please see the [API docs](/vault/api-docs/auth/oci) for more details.
---
layout: docs
page_title: Google Cloud - Auth Methods
description: |-
The "gcp" auth method allows users and machines to authenticate to Vault using
Google Cloud service accounts.
---
# Google Cloud auth method
The `gcp` auth method allows Google Cloud Platform entities to authenticate to
Vault. Vault treats Google Cloud as a trusted third party and verifies
authenticating entities against the Google Cloud APIs. This backend allows for
authentication of:
- Google Cloud IAM service accounts
- Google Compute Engine (GCE) instances
This backend focuses on identities specific to Google _Cloud_ and does not
support authenticating arbitrary Google or Google Workspace users or generic OAuth
against Google.
This plugin is developed in a separate GitHub repository at
[hashicorp/vault-plugin-auth-gcp][repo],
but is automatically bundled in Vault releases. Please file all feature
requests, bugs, and pull requests specific to the GCP plugin under that
repository.
## Authentication
### Via the CLI helper
Vault includes a CLI helper that obtains a signed JWT locally and sends the
request to Vault.
```shell-session
# Authentication to vault outside of Google Cloud
$ vault login -method=gcp \
role="my-role" \
service_account="[email protected]" \
jwt_exp="15m" \
credentials=@path/to/signer/credentials.json
```
```shell-session
# Authentication to vault inside of Google Cloud
$ vault login -method=gcp role="my-role"
```
For more usage information, run `vault auth help gcp`.
-> **Note:** The `project` parameter has been removed in Vault 1.5.9+, 1.6.5+, and 1.7.2+.
It is no longer needed for configuration and will be ignored if provided.
### Via the CLI
```shell-session
$ vault write -field=token auth/gcp/login \
role="my-role" \
jwt="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
```
See [Generating JWTs](#generating-jwts) for ways to obtain the JWT token.
### Via the API
```shell-session
$ curl \
--request POST \
--data '{"role":"my-role", "jwt":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."}' \
http://127.0.0.1:8200/v1/auth/gcp/login
```
See [API docs][api-docs] for expected response.
## Configuration
Auth methods must be configured in advance before users or machines can
authenticate. These steps are usually completed by an operator or configuration
management tool.
1. Enable the Google Cloud auth method:
```shell-session
$ vault auth enable gcp
```
1. Configure the auth method credentials if Vault is not running on Google Cloud:
```shell-session
$ vault write auth/gcp/config \
credentials=@/path/to/credentials.json
```
If you are using instance credentials or want to specify credentials via
an environment variable, you can skip this step. To learn more, see the
[Google Cloud Credentials](#gcp-credentials) section below.
-> **Note**: If you're using a [Private Google Access](https://cloud.google.com/vpc/docs/configure-private-google-access)
environment, you will additionally need to configure your environment’s custom endpoints
via the [custom_endpoint](/vault/api-docs/auth/gcp#custom_endpoint) configuration parameter.
In some cases, you cannot set sensitive IAM security credentials in your
Vault configuration. For example, your organization may require that all
security credentials are short-lived or explicitly tied to a machine identity.
To provide IAM security credentials to Vault, we recommend using Vault
[plugin workload identity federation](#plugin-workload-identity-federation-wif)
(WIF) as shown below.
1. Alternatively, configure the audience claim value and the service account email to assume for plugin workload identity federation:
```shell-session
$ vault write auth/gcp/config \
identity_token_audience="<TOKEN AUDIENCE>" \
service_account_email="<SERVICE ACCOUNT EMAIL>"
```
Vault's identity token provider signs the plugin identity token JWT internally.
If a trust relationship exists between Vault and GCP through WIF, the auth
method can exchange the Vault identity token for a
[federated access token](https://cloud.google.com/docs/authentication/token-types#access).
To configure a trusted relationship between Vault and GCP:
- You must configure the [identity token issuer backend](/vault/api-docs/secret/identity/tokens#configure-the-identity-tokens-backend)
for Vault.
- GCP must have a
[workload identity pool and provider](https://cloud.google.com/iam/docs/manage-workload-identity-pools-providers)
configured with information about the fully qualified and network-reachable
issuer URL for the Vault plugin's
[identity token provider](/vault/api-docs/secret/identity/tokens#read-plugin-identity-well-known-configurations).
Establishing a trusted relationship between Vault and GCP ensures that GCP
can fetch JWKS
[public keys](/vault/api-docs/secret/identity/tokens#read-active-public-keys)
and verify the plugin identity token signature.
1. Create a named role:
For an `iam`-type role:
```shell-session
$ vault write auth/gcp/role/my-iam-role \
type="iam" \
policies="dev,prod" \
bound_service_accounts="[email protected]"
```
For a `gce`-type role:
```shell-session
$ vault write auth/gcp/role/my-gce-role \
type="gce" \
policies="dev,prod" \
bound_projects="my-project1,my-project2" \
bound_zones="us-east1-b" \
bound_labels="foo:bar,zip:zap" \
bound_service_accounts="[email protected]"
```
Note that `bound_service_accounts` is only required for `iam`-type roles.
For the complete list of configuration options for each type, please see the
[API documentation][api-docs].
## GCP credentials
The Google Cloud Vault auth method uses the official Google Cloud Golang SDK.
This means it supports the common ways of [providing credentials to Google
Cloud][cloud-creds].
1. The environment variable `GOOGLE_APPLICATION_CREDENTIALS`. This is specified
as the **path** to a Google Cloud credentials file, typically for a service
account. If this environment variable is present, the resulting credentials are
used. If the credentials are invalid, an error is returned.
1. Default instance credentials. When no environment variable is present, the
default service account credentials are used.
For more information on service accounts, please see the [Google Cloud Service
Accounts documentation][service-accounts].
To use this auth method, the service account must have the following minimum
scope:
```text
https://www.googleapis.com/auth/cloud-platform
```
### Required GCP permissions
#### Enabled GCP APIs
The GCP project must have the following APIs enabled:
- [iam.googleapis.com](https://console.cloud.google.com/flows/enableapi?apiid=iam.googleapis.com)
for `iam` and `gce` type roles.
- [compute.googleapis.com](https://console.cloud.google.com/flows/enableapi?apiid=compute.googleapis.com)
for `gce` type roles.
- [cloudresourcemanager.googleapis.com](https://console.cloud.google.com/flows/enableapi?apiid=cloudresourcemanager.googleapis.com)
for `iam` and `gce` type roles that set [`add_group_aliases`](/vault/api-docs/auth/gcp#add_group_aliases) to true.
#### Vault server permissions
**For `iam`-type Vault roles**, the service account [`credentials`](/vault/api-docs/auth/gcp#credentials)
given to Vault can have the following role:
```text
roles/iam.serviceAccountKeyAdmin
```
**For `gce`-type Vault roles**, the service account [`credentials`](/vault/api-docs/auth/gcp#credentials)
given to Vault can have the following role:
```text
roles/compute.viewer
```
If you instead wish to create a custom role with only the exact GCP permissions
required, use the following list of permissions:
```text
iam.serviceAccounts.get
iam.serviceAccountKeys.get
compute.instances.get
compute.instanceGroups.list
```
These allow Vault to:
- verify that the service account (either directly authenticating or associated with the
authenticating GCE instance) exists
- get the corresponding public keys for verifying JWTs signed by service account
private keys.
- verify authenticating GCE instances exist
- compare bound fields for GCE roles (zone/region, labels, or membership
in given instance groups)
If you are using Group Aliases as described below, you will also need to add the
`resourcemanager.projects.get` permission.
#### Permissions for authenticating against Vault
If you are authenticating to Vault from Google Cloud, you can skip the following step as
Vault will generate and present the identity token of the service account configured
on the instance or the pod.
Note that the previously mentioned permissions are given to the _Vault servers_.
The IAM service account or GCE instance that is **authenticating against Vault**
must have the following role:
```text
roles/iam.serviceAccountTokenCreator
```
!> **WARNING:** Make sure this role is applied narrowly so your service account can
impersonate only itself. If this role is applied GCP project-wide, it will allow the service
account to impersonate any service account in the GCP project where it resides.
See [Managing service account impersonation](https://cloud.google.com/iam/docs/impersonating-service-accounts)
for more information.
## Plugin Workload Identity Federation (WIF)
<EnterpriseAlert product="vault" />
The GCP auth method supports the plugin WIF workflow and has a source of identity called
a plugin identity token. A plugin identity token is a JWT that is signed internally by the Vault
[plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration).
If there is a trust relationship configured between Vault and GCP through
[workload identity federation](https://cloud.google.com/iam/docs/workload-identity-federation),
the auth method can exchange its identity token for short-lived access tokens needed to
perform its actions.
Exchanging identity tokens for access tokens lets the GCP auth method
operate without configuring explicit access to sensitive IAM security
credentials.
To configure the auth method to use plugin WIF:
1. Ensure that Vault [openid-configuration](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-openid-configuration)
and [public JWKS](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-public-jwks)
APIs are network-reachable by GCP. We recommend using an API proxy or gateway
if you need to limit Vault API exposure.
1. Create a
[workload identity pool and provider](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#create-pool-provider)
in GCP.
1. The provider URL **must** point at your [Vault plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration) with the
`/.well-known/openid-configuration` suffix removed. For example:
`https://host:port/v1/identity/oidc/plugins`.
1. Uniquely identify the recipient of the plugin identity token as the audience.
You can use the [default audience](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#prepare)
for the identity pool or a custom value less than 256 characters.
1. [Authenticate a workload](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#authenticate)
in GCP by granting the identity pool access to a dedicated service account using service account impersonation.
Filter requests using the unique `sub` claim issued by plugin identity tokens so the GCP Auth method can
impersonate the service account. `sub` claims have the form: `plugin-identity:<NAMESPACE>:auth:<GCP_AUTH_MOUNT_ACCESSOR>`.
1. Configure the GCP auth method with the OIDC audience value and service account
email.
```shell-session
$ vault write auth/gcp/config \
identity_token_audience="//iam.googleapis.com/projects/410449834127/locations/global/workloadIdentityPools/vault-gcp-auth-43777a63/providers/vault-gcp-auth-wif-provider" \
service_account_email="vault-plugin-wif-auth@hc-b712f250b4e04cacbadd258a90b.iam.gserviceaccount.com"
```
Your auth method can now use plugin WIF for its configuration credentials.
By default, WIF [credentials](https://cloud.google.com/iam/docs/workload-identity-federation#access_management)
have a time-to-live of 1 hour and automatically refresh when they expire.
Please see the [API documentation](/vault/api-docs/auth/gcp#configure)
for more details on the fields associated with plugin WIF.
## Group aliases
As of Vault 1.0, roles can specify an `add_group_aliases` boolean parameter
that adds [group aliases][identity-group-aliases] to the auth response. These
aliases can aid in building reusable policies since they are available as
interpolated values in Vault's policy engine. Once enabled, the auth response
will include the following aliases:
```json
[
"project-$PROJECT_ID",
"folder-$SUBFOLDER_ID",
"folder-$FOLDER_ID",
"organization-$ORG_ID"
]
```
If you are using a custom role for Vault server, you will need to add the
`resourcemanager.projects.get` permission to your custom role.
## Implementation details
This section describes the implementation details for how Vault communicates
with Google Cloud to authenticate and authorize JWT tokens. This information is
provided for those who are curious, but these details are not
required knowledge for using the auth method.
### IAM login
IAM login applies only to roles of type `iam`. The Vault authentication workflow
for IAM service accounts looks like this:
[](/img/vault-gcp-iam-auth-workflow.svg)
1. The client generates a signed JWT using the Service Account Credentials
[`projects.serviceAccounts.signJwt`][signjwt-method] API method. For examples
of how to do this, see the [Generating JWTs](#generating-jwts) section.
2. The client sends this signed JWT to Vault along with a role name.
3. Vault extracts the `kid` header value, which contains the ID of the
key-pair used to generate the JWT, and the `sub` ID/email to find the service
account key. If the service account does not exist or the key is not linked to
the service account, Vault denies authentication.
4. Vault authorizes the confirmed service account against the given role. If
that is successful, a Vault token with the proper policies is returned.
### GCE login
GCE login only applies to roles of type `gce` and **must be completed on an
infrastructure running on Google Cloud**. These steps will not work from your
local laptop or another cloud provider.
[](/img/vault-gcp-gce-auth-workflow.svg)
1. The client obtains an [instance identity metadata token][instance-identity]
on a GCE instance.
2. The client sends this JWT to Vault along with a role name.
3. Vault extracts the `kid` header value, which contains the ID of the
key-pair used to generate the JWT, to find the OAuth2 public cert to verify
this JWT.
4. Vault authorizes the confirmed instance against the given role, ensuring
the instance matches the bound zones, regions, or instance groups. If that is
successful, a Vault token with the proper policies is returned.
## Generating JWTs
This section details the various methods and examples for obtaining JWT
tokens.
### Service account credentials API
This describes how to use the GCP Service Account Credentials [API method][signjwt-method]
directly to generate the signed JWT with the claims that Vault expects. Note the CLI
does this process for you and is much easier, and that there is very little
reason to do this yourself.
#### curl example
Vault requires the following minimum claim set:
```json
{
"sub": "$SERVICE_ACCOUNT_EMAIL_OR_ID",
"aud": "vault/$ROLE",
"exp": "$EXPIRATION"
}
```
For the API method, providing the expiration claim `exp` is required. If it is omitted,
it will not be added automatically and Vault will deny authentication. Expiration must
be specified as a [NumericDate](https://tools.ietf.org/html/rfc7519#section-2) value
(seconds from Epoch). This value must be before the max JWT expiration allowed for a
role. This defaults to 15 minutes and cannot be more than 1 hour.
If a user generates a token that expires more than 15 minutes in the future, and the GCP role has `max_jwt_exp` set to the default, Vault returns an error such as `role requires that service account JWTs expire within 900 seconds`. In this case, the user must create a new signed JWT with a shorter expiration, or set `max_jwt_exp` to a higher value in the GCP role.
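For example, from a POSIX shell you can compute a compliant expiration 15 minutes in the future (a sketch; `date +%s` prints the current time in seconds since the epoch):

```shell-session
$ EXPIRATION=$(( $(date +%s) + 900 ))
```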
Once you have all this information, the JWT token can be signed using curl and
[oauth2l](https://github.com/google/oauth2l):
```shell-session
ROLE="my-role"
SERVICE_ACCOUNT="[email protected]"
OAUTH_TOKEN="$(oauth2l header cloud-platform)"
EXPIRATION="<your_token_expiration>"
JWT_CLAIM="{\\\"aud\\\":\\\"vault/${ROLE}\\\", \\\"sub\\\": \\\"${SERVICE_ACCOUNT}\\\", \\\"exp\\\": ${EXPIRATION}}"
$ curl \
--header "${OAUTH_TOKEN}" \
--header "Content-Type: application/json" \
--request POST \
--data "{\"payload\": \"${JWT_CLAIM}\"}" \
"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/${SERVICE_ACCOUNT}:signJwt"
```
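The response contains the signed token in its `signedJwt` field. Extract that value (for example, with `jq -r .signedJwt`) into a variable such as `SIGNED_JWT` and pass it to the login endpoint; a sketch:

```shell-session
$ vault write auth/gcp/login role="${ROLE}" jwt="${SIGNED_JWT}"
```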
#### gcloud example
You can also do this through the (currently beta) gcloud command. Note that you will
be required to provide the expiration claim `exp` as a part of the JWT input to the
command.
```shell-session
$ gcloud beta iam service-accounts sign-jwt $INPUT_JWT_CLAIMS $OUTPUT_JWT_FILE \
[email protected] \
--project=my-project
```
#### Golang example
Read more on the
[Google Open Source blog](https://opensource.googleblog.com/2017/08/hashicorp-vault-and-google-cloud-iam.html).
### GCE
You can autogenerate this token in Vault versions 1.8.2 or higher.
GCE tokens **can only be generated from a GCE instance**.
1. Vault can automatically discover the identity token on a GCE/GKE instance. This simplifies
authenticating to Vault like so:
```shell-session
$ vault login \
-method=gcp \
role="my-gce-role"
```
1. The JWT token can also be obtained from the `"service-accounts/default/identity"` endpoint of the
instance's metadata server.
#### curl example
```shell-session
ROLE="my-gce-role"
$ curl \
--header "Metadata-Flavor: Google" \
--get \
--data-urlencode "audience=http://vault/${ROLE}" \
--data-urlencode "format=full" \
"http://metadata/computeMetadata/v1/instance/service-accounts/default/identity"
```
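If you are not using the CLI helper, you can combine the metadata request above with a normal login call; a sketch:

```shell-session
$ ROLE="my-gce-role"
$ JWT="$(curl --silent \
    --header "Metadata-Flavor: Google" \
    --get \
    --data-urlencode "audience=http://vault/${ROLE}" \
    --data-urlencode "format=full" \
    "http://metadata/computeMetadata/v1/instance/service-accounts/default/identity")"
$ vault write auth/gcp/login role="${ROLE}" jwt="${JWT}"
```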
## API
The GCP Auth Plugin has a full HTTP API. Please see the
[API docs][api-docs] for more details.
[jwt]: https://tools.ietf.org/html/rfc7519
[signjwt-method]: https://cloud.google.com/iam/docs/reference/credentials/rest/v1/projects.serviceAccounts/signJwt
[cloud-creds]: https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application
[service-accounts]: https://cloud.google.com/compute/docs/access/service-accounts
[api-docs]: /vault/api-docs/auth/gcp
[identity-group-aliases]: /vault/api-docs/secret/identity/group-alias
[instance-identity]: https://cloud.google.com/compute/docs/instances/verifying-instance-identity
[repo]: https://github.com/hashicorp/vault-plugin-auth-gcp
## Code example
The following example demonstrates the Google Cloud auth method to authenticate
with Vault.
<CodeTabs>
<CodeBlockConfig>
```go
package main
import (
"context"
"fmt"
"os"
vault "github.com/hashicorp/vault/api"
auth "github.com/hashicorp/vault/api/auth/gcp"
)
// Fetches a key-value secret (kv-v2) after authenticating to Vault
// via GCP IAM, one of two auth methods used to authenticate with
// GCP (the other is GCE auth).
func getSecretWithGCPAuthIAM() (string, error) {
config := vault.DefaultConfig() // modify for more granular configuration
client, err := vault.NewClient(config)
if err != nil {
return "", fmt.Errorf("unable to initialize Vault client: %w", err)
}
// For IAM-style auth, the environment variable GOOGLE_APPLICATION_CREDENTIALS
// must be set with the path to a valid credentials JSON file, otherwise
// Vault will fall back to Google's default instance credentials.
// Learn about authenticating to GCP with service account credentials at https://cloud.google.com/docs/authentication/production
if pathToCreds := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS"); pathToCreds == "" {
fmt.Printf("WARNING: Environment variable GOOGLE_APPLICATION_CREDENTIALS was not set. IAM client for JWT signing and Vault server IAM client will both fall back to default instance credentials.\n")
}
svcAccountEmail := fmt.Sprintf("%s@%s.iam.gserviceaccount.com", os.Getenv("GCP_SERVICE_ACCOUNT_NAME"), os.Getenv("GOOGLE_CLOUD_PROJECT"))
// We pass the auth.WithIAMAuth option to use the IAM-style authentication
// of the GCP auth backend. Otherwise, we default to using GCE-style
// authentication, which gets its credentials from the metadata server.
gcpAuth, err := auth.NewGCPAuth(
"dev-role-iam",
auth.WithIAMAuth(svcAccountEmail),
)
if err != nil {
return "", fmt.Errorf("unable to initialize GCP auth method: %w", err)
}
authInfo, err := client.Auth().Login(context.TODO(), gcpAuth)
if err != nil {
return "", fmt.Errorf("unable to login to GCP auth method: %w", err)
}
if authInfo == nil {
return "", fmt.Errorf("login response did not return client token")
}
// get secret from the default mount path for KV v2 in dev mode, "secret"
secret, err := client.KVv2("secret").Get(context.Background(), "creds")
if err != nil {
return "", fmt.Errorf("unable to read secret: %w", err)
}
// data map can contain more than one key-value pair,
// in this case we're just grabbing one of them
value, ok := secret.Data["password"].(string)
if !ok {
return "", fmt.Errorf("value type assertion failed: %T %#v", secret.Data["password"], secret.Data["password"])
}
return value, nil
}
```
</CodeBlockConfig>
<CodeBlockConfig>
```cs
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Services;
using Google.Apis.Iam.v1;
using Newtonsoft.Json;
using VaultSharp;
using VaultSharp.V1.AuthMethods;
using VaultSharp.V1.AuthMethods.GoogleCloud;
using VaultSharp.V1.Commons;
using Data = Google.Apis.Iam.v1.Data;
namespace Examples
{
public class GCPAuthExample
{
/// <summary>
/// Fetches a key-value secret (kv-v2) after authenticating to Vault via GCP IAM,
/// one of two auth methods used to authenticate with GCP (the other is GCE auth).
/// </summary>
public string GetSecretGcp()
{
var vaultAddr = Environment.GetEnvironmentVariable("VAULT_ADDR");
if(String.IsNullOrEmpty(vaultAddr))
{
throw new System.ArgumentNullException("Vault Address");
}
var roleName = Environment.GetEnvironmentVariable("VAULT_ROLE");
if(String.IsNullOrEmpty(roleName))
{
throw new System.ArgumentNullException("Vault Role Name");
}
// Learn about authenticating to GCP with service account credentials at https://cloud.google.com/docs/authentication/production
if(String.IsNullOrEmpty(Environment.GetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS")))
{
Console.WriteLine("WARNING: Environment variable GOOGLE_APPLICATION_CREDENTIALS was not set. IAM client for JWT signing will fall back to default instance credentials.");
}
var jwt = SignJWT();
IAuthMethodInfo authMethod = new GoogleCloudAuthMethodInfo(roleName, jwt);
var vaultClientSettings = new VaultClientSettings(vaultAddr, authMethod);
IVaultClient vaultClient = new VaultClient(vaultClientSettings);
// We can retrieve the secret after creating our VaultClient object
Secret<SecretData> kv2Secret = null;
kv2Secret = vaultClient.V1.Secrets.KeyValue.V2.ReadSecretAsync(path: "/creds").Result;
var password = kv2Secret.Data.Data["password"];
return password.ToString();
}
/// <summary>
/// Generate signed JWT from GCP IAM
/// </summary>
private string SignJWT()
{
var roleName = Environment.GetEnvironmentVariable("GCP_ROLE");
var svcAcctName = Environment.GetEnvironmentVariable("GCP_SERVICE_ACCOUNT_NAME");
var gcpProjName = Environment.GetEnvironmentVariable("GOOGLE_CLOUD_PROJECT");
IamService iamService = new IamService(new BaseClientService.Initializer
{
HttpClientInitializer = GetCredential(),
ApplicationName = "Google-iamSample/0.1",
});
string svcEmail = $"{svcAcctName}@{gcpProjName}.iam.gserviceaccount.com";
string name = $"projects/-/serviceAccounts/{svcEmail}";
TimeSpan currentTime = (DateTime.UtcNow - new DateTime(1970, 1, 1));
int expiration = (int)(currentTime.TotalSeconds) + 900;
Data.SignJwtRequest requestBody = new Data.SignJwtRequest();
requestBody.Payload = JsonConvert.SerializeObject(new Dictionary<string, object> ()
{
{ "aud", $"vault/{roleName}" } ,
{ "sub", svcEmail } ,
{ "exp", expiration }
});
ProjectsResource.ServiceAccountsResource.SignJwtRequest request = iamService.Projects.ServiceAccounts.SignJwt(requestBody, name);
Data.SignJwtResponse response = request.Execute();
return JsonConvert.SerializeObject(response.SignedJwt).Replace("\"", "");
}
public static GoogleCredential GetCredential()
{
GoogleCredential credential = Task.Run(() => GoogleCredential.GetApplicationDefaultAsync()).Result;
if (credential.IsCreateScopedRequired)
{
credential = credential.CreateScoped("https://www.googleapis.com/auth/cloud-platform");
}
return credential;
}
}
}
```
</CodeBlockConfig>
</CodeTabs>
---
layout: docs
page_title: AWS - Auth Methods
description: The aws auth method allows automated authentication of AWS entities.
---
# AWS auth method
@include 'x509-sha1-deprecation.mdx'
@include 'aws-sha1-deprecation.mdx'
The `aws` auth method provides an automated mechanism to retrieve a Vault token
for IAM principals and AWS EC2 instances. Unlike most Vault auth methods, in
many circumstances this method does not require operators to first deploy or
provision security-sensitive credentials (tokens, username/password pairs,
client certificates, etc.).
## Authentication workflow
There are two authentication types present in the aws auth method: `iam` and
`ec2`.
With the `iam` method, a special AWS request signed with AWS IAM credentials is
used for authentication. The IAM credentials are automatically supplied to AWS
instances in IAM instance profiles, Lambda functions, and others, and it is
this information already provided by AWS which Vault can use to authenticate
clients.
With the `ec2` method, AWS is treated as a Trusted Third Party and
cryptographically signed dynamic metadata information that uniquely represents
each EC2 instance is used for authentication. This metadata information is
automatically supplied by AWS to all EC2 instances.
Based on how you attempt to authenticate, Vault will determine if you are
attempting to use the `iam` or `ec2` type. Each has a different authentication
workflow, and each can solve different use cases.
Note: The `ec2` method was implemented before the primitives to implement the
`iam` method were supported by AWS. The `iam` method is the recommended approach
as it is more flexible and aligns with best practices to perform access
control and authentication. See the section on comparing the two auth methods
below for more information.
-> **Usage:** See the [Authentication](#authentication) section for Vault CLI
and API usage examples. The [Code Example](#code-example) section provides a
code snippet demonstrating the authentication with Vault using the AWS auth
method.
### IAM auth method
The AWS STS API includes a method, [`sts:GetCallerIdentity`](http://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html), which allows you to validate the identity of a client. The client signs a `GetCallerIdentity` query using the [AWS Signature v4 algorithm](http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html) and sends it to the Vault server. The credentials used to sign the GetCallerIdentity request can come from the EC2 instance metadata service for an EC2 instance, or from the AWS environment variables in an AWS Lambda function execution, which obviates the need for an operator to manually provision some sort of identity material first. However, the credentials can, in principle, come from anywhere, not just from the locations AWS has provided for you.
The `GetCallerIdentity` query consists of four pieces of information: the request URL, the request body, the request headers, and the request method, as the AWS signature is computed over those fields. The Vault server reconstructs the query using this information and forwards it on to the AWS STS service. Depending on the response from the STS service, the server authenticates the client.
Notably, clients don't need network-level access themselves to talk
to the AWS STS API endpoint; they merely need access to the credentials to sign
the request. However, it means that the Vault server does need network-level
access to send requests to the STS endpoint.
Each signed AWS request includes the current timestamp to mitigate the risk of
replay attacks. In addition, Vault allows you to require an additional header,
`X-Vault-AWS-IAM-Server-ID`, to be present to mitigate against different types
of replay attacks (such as a signed `GetCallerIdentity` request stolen from a
dev Vault instance and used to authenticate to a prod Vault instance). Vault
further requires that this header be one of the headers included in the AWS
signature and relies upon AWS to authenticate that signature.
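For example, you might configure the header requirement on the client config and
then supply the matching value at login. This is a minimal sketch; the header
value and role name are illustrative:

```shell-session
$ vault write auth/aws/config/client \
    iam_server_id_header_value="vault.example.com"

$ vault login -method=aws header_value="vault.example.com" role="dev-role-iam"
```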
While AWS API endpoints support both signed GET and POST requests, for
simplicity, the aws auth method supports only POST requests. It also does not
support `presigned` requests, i.e., requests with `X-Amz-Credential`,
`X-Amz-Signature`, and `X-Amz-SignedHeaders` GET query parameters containing
the authenticating information.
It's also important to note that Amazon does NOT appear to include any sort of
authorization around calls to `GetCallerIdentity`. For example, if you have an
IAM policy on your credential that requires all access to be MFA authenticated,
non-MFA authenticated credentials (i.e., raw credentials, not those retrieved
by calling `GetSessionToken` and supplying an MFA code) will still be able to
authenticate to Vault using this method. It does not appear possible to require
that an IAM principal be MFA authenticated when authenticating to Vault.
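As a brief illustration of the client side of the `iam` flow, the following Go
sketch uses the `github.com/hashicorp/vault/api/auth/aws` helper, which signs the
`sts:GetCallerIdentity` request with credentials from the standard AWS credential
chain; the role name is an example value:

```go
package main

import (
	"context"
	"fmt"

	vault "github.com/hashicorp/vault/api"
	auth "github.com/hashicorp/vault/api/auth/aws"
)

// loginWithIAM authenticates to Vault's AWS auth method using the iam type.
func loginWithIAM() (*vault.Client, error) {
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		return nil, fmt.Errorf("unable to initialize Vault client: %w", err)
	}

	// WithIAMAuth signs an sts:GetCallerIdentity request using credentials
	// from the standard AWS credential chain (environment variables, shared
	// config, or an instance profile). "dev-role-iam" is an example role.
	awsAuth, err := auth.NewAWSAuth(
		auth.WithRole("dev-role-iam"),
		auth.WithIAMAuth(),
	)
	if err != nil {
		return nil, fmt.Errorf("unable to initialize AWS auth method: %w", err)
	}

	authInfo, err := client.Auth().Login(context.Background(), awsAuth)
	if err != nil {
		return nil, fmt.Errorf("unable to login to AWS auth method: %w", err)
	}
	if authInfo == nil {
		return nil, fmt.Errorf("login response did not return client token")
	}
	return client, nil
}
```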
### EC2 auth method
Amazon EC2 instances have access to metadata which describes the instance. The
Vault EC2 auth method leverages the components of this metadata to authenticate
and distribute an initial Vault token to an EC2 instance. The data flow (which
is also represented in the graphic below) is as follows:
[](/img/vault-aws-ec2-auth-flow.png)
1. An AWS EC2 instance fetches its [AWS Instance Identity Document][aws-iid]
from the [EC2 Metadata Service][aws-ec2-mds]. In addition to data itself, AWS
also provides the PKCS#7 signature of the data, and publishes the public keys
(by region) which can be used to verify the signature.
1. The AWS EC2 instance makes a request to Vault with the PKCS#7 signature.
The PKCS#7 signature contains the Instance Identity Document.
1. Vault verifies the signature on the PKCS#7 document, ensuring the information
is certified accurate by AWS. This process validates both the validity and
integrity of the document data. As an added security measure, Vault verifies
that the instance is currently running using the public EC2 API endpoint.
1. Provided all steps are successful, Vault returns the initial Vault token to
the EC2 instance. This token is mapped to any configured policies based on the
instance metadata.
[aws-iid]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html
[aws-ec2-mds]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
There are various modifications to this workflow that provide more or less
security, as detailed later in this documentation.
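To make the flow concrete, a client on an EC2 instance might log in by fetching
the PKCS#7 signature from the instance metadata service and posting it to Vault.
This sketch assumes IMDSv1 is reachable and uses an illustrative role name:

```shell-session
$ pkcs7=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/pkcs7 | tr -d '\n')

$ vault write auth/aws/login role=dev-role-ec2 pkcs7="$pkcs7"
```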
## Authorization workflow
The basic mechanism of operation is per-role. Roles are registered in the
method and associated with a specific authentication type that cannot be
changed once the role has been created. Roles can also be associated with
various optional restrictions, such as the set of allowed policies and max TTLs
on the generated tokens. Each role can be specified with the constraints that
are to be met during the login. Many of these constraints accept lists of
required values. For any constraint which accepts a list of values, that
constraint will be considered satisfied if any one of the values is matched
during the login process. For example, one such constraint that is
supported is to bind against a list of AMI IDs. A role which is bound to a
specific list of AMIs can only be used for login by EC2 instances that are
deployed to one of the AMIs that the role is bound to.
The iam auth method allows you to specify bound IAM principal ARNs.
Clients authenticating to Vault must have an ARN that matches one of the ARNs bound to
the role they are attempting to login to. The bound ARN allows specifying a
wildcard at the end of the bound ARN. For example, if the bound ARN were
`arn:aws:iam::123456789012:*` it would allow any principal in AWS account
123456789012 to login to it. Similarly, if it were
`arn:aws:iam::123456789012:role/*` it would allow any IAM role in the AWS
account to login to it. If you wish to specify a wildcard, you must give Vault
`iam:GetUser` and `iam:GetRole` permissions to properly resolve the full user
path.
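As a sketch of the wildcard behavior, the following creates a role that any IAM
role in the example account can log in to (all values are illustrative):

```shell-session
$ vault write auth/aws/role/dev-role-iam \
    auth_type=iam \
    bound_iam_principal_arn='arn:aws:iam::123456789012:role/*' \
    policies=dev \
    max_ttl=15m
```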
In general, role bindings that are specific to an EC2 instance are only checked
when the ec2 auth method is used to login, while bindings specific to IAM
principals are only checked when the iam auth method is used to login. However,
the iam method includes the ability for you to "infer" an EC2 instance ID from
the authenticated client and apply many of the bindings that would otherwise
only apply specifically to EC2 instances.
In many cases, an organization will use a "seed AMI" that is specialized after
bootup by configuration management or similar processes. For this reason, a
role entry in the method can also be associated with a "role tag" when using
the ec2 auth type. These tags
are generated by the method and are placed as the value of a tag with the
given key on the EC2 instance. The role tag can be used to further restrict the
parameters set on the role, but cannot be used to grant additional privileges.
If a role with an AMI bind constraint has "role tag" enabled on the role, and
the EC2 instance performing login does not have an expected tag on it, or if the
tag on the instance is deleted for some reason, authentication fails.
The role tags can be generated at will by an operator with appropriate API
access. They are HMAC-signed by a per-role key stored within the method, allowing
the method to verify the authenticity of a found role tag and ensure that it has
not been tampered with. There is also a mechanism to deny list role tags if one
has been found to be distributed outside of its intended set of machines.
## IAM authentication inferences
With the iam auth method, normally Vault will see the IAM principal that
authenticated, either the IAM user or role. However, when you have an EC2
instance in an IAM instance profile, Vault can actually see the instance ID of
the instance and can "infer" that it's an EC2 instance. However, there are
important security caveats to be aware of before configuring Vault to make that
inference.
Each AWS IAM role has a "trust policy" which specifies which entities are
trusted to call
[`sts:AssumeRole`](http://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html)
on the role and retrieve credentials that can be used to authenticate with that
role. When AssumeRole is called, a parameter called RoleSessionName is passed
in, which is chosen arbitrarily by the entity which calls AssumeRole. If you
have a role with an ARN `arn:aws:iam::123456789012:role/MyRole`, then the
credentials returned by calling AssumeRole on that role will be
`arn:aws:sts::123456789012:assumed-role/MyRole/RoleSessionName` where
RoleSessionName is the session name in the AssumeRole API call. It is this
latter value which Vault actually sees.
When you have an EC2 instance in an instance profile, the corresponding role's
trust policy specifies that the principal `"Service": "ec2.amazonaws.com"` is
trusted to call AssumeRole. When this is configured, EC2 calls AssumeRole on
behalf of your instance, with a RoleSessionName corresponding to the
instance's instance ID. Thus, it is possible for Vault to extract the instance
ID out of the value it sees when an EC2 instance in an instance profile
authenticates to Vault with the iam auth method. This is known as
"inferencing." Vault can be configured, on a role-by-role basis, to infer that a
caller is an EC2 instance and, if so, apply further bindings that apply
specifically to EC2 instances -- most of the bindings available to the ec2
auth method.
However, it is very important to note that if any entity other than an AWS
service is permitted to call AssumeRole on your role, then that entity can
simply pass in your instance's instance ID and spoof your instance to Vault.
This also means that if anybody is able to modify your role's trust policy
(e.g., via
[`iam:UpdateAssumeRolePolicy`](http://docs.aws.amazon.com/IAM/latest/APIReference/API_UpdateAssumeRolePolicy.html)),
then that person could also spoof your instances. If this is a concern but you
would like to take advantage of inferencing, then you should tightly restrict
who is able to call AssumeRole on the role, tightly restrict who is able to call
UpdateAssumeRolePolicy on the role, and monitor CloudTrail logs for calls to
AssumeRole and UpdateAssumeRolePolicy. All of these caveats apply equally to
using the iam auth method without inferencing; the point is merely
that Vault cannot offer an iron-clad guarantee about the inference and it is up
to operators to determine, based on their own AWS controls and use cases,
whether or not it's appropriate to configure inferencing.
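For illustration, a role that uses inferencing might look like the following;
it requires the caller to be an EC2 instance in the given region launched from
the given AMI (all values are examples):

```shell-session
$ vault write auth/aws/role/dev-role-iam \
    auth_type=iam \
    bound_iam_principal_arn='arn:aws:iam::123456789012:role/MyRole' \
    inferred_entity_type=ec2_instance \
    inferred_aws_region=us-east-1 \
    bound_ami_id=ami-0abcdef1234567890 \
    policies=dev
```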
## Mixing authentication types
Vault allows you to configure a given role to use either the ec2 auth method or
the iam auth method, but not both. Further, **assumed roles are not supported**,
and Vault prevents you from enforcing restrictions that it cannot enforce given
the chosen auth type for a role. Some examples of how this works in practice:
1. You configure a role with the ec2 auth type, with a bound AMI ID. A
client would not be able to login using the iam auth type.
2. You configure a role with the iam auth type, with a bound IAM
principal ARN. A client would not be able to login with the ec2 auth method.
3. You configure a role with the iam auth type and further configure
inferencing. You have a bound AMI ID and a bound IAM principal ARN. A client
must login using the iam method; the RoleSessionName must be a valid instance
ID viewable by Vault, and the instance must have come from the bound AMI ID.
## Comparison of the IAM and EC2 methods
The iam and ec2 auth methods serve similar and somewhat overlapping
functionality, in that both authenticate some type of AWS entity to Vault.
Here are some comparisons that illustrate why the `iam` method is preferred over
`ec2`.
- What type of entity is authenticated:
- The ec2 auth method authenticates only AWS EC2 instances and is specialized
to handle EC2 instances, such as restricting access to EC2 instances from
a particular AMI, EC2 instances in a particular instance profile, or EC2
instances with a specialized tag value (via the role_tag feature).
- The iam auth method authenticates AWS IAM principals. This can
include IAM users, IAM roles assumed from other accounts, AWS Lambdas that
are launched in an IAM role, or even EC2 instances that are launched in an
IAM instance profile. However, because it authenticates more generalized IAM
principals, this method doesn't offer more granular controls beyond binding
to a given IAM principal without the use of inferencing.
- How the entities are authenticated
- The ec2 auth method authenticates instances by making use of the EC2
instance identity document, which is a cryptographically signed document
containing metadata about the instance. This document changes relatively
infrequently, so Vault adds a number of other constructs to mitigate against
replay attacks, such as client nonces, role tags, instance migrations, etc.
Because the instance identity document is signed by AWS, you have a strong
guarantee that it came from an EC2 instance.
- The iam auth method authenticates by having clients provide a specially
signed AWS API request which the method then passes on to AWS to validate
the signature and tell Vault who created it. The actual secret (i.e.,
the AWS secret access key) is never transmitted over the wire, and the
AWS signature algorithm automatically expires requests after 15 minutes,
providing simple and robust protection against replay attacks. The use of
inferencing, however, provides a weaker guarantee that the credentials came
from an EC2 instance in an IAM instance profile compared to the ec2
authentication mechanism.
- The instance identity document used in the ec2 auth method is more likely to
be stolen given its relatively static nature, but it's harder to spoof. On
the other hand, the credentials of an EC2 instance in an IAM instance
profile are less likely to be stolen given their dynamic and short-lived
nature, but it's easier to spoof credentials that might have come from an
EC2 instance.
- Specific use cases
- If you have non-EC2 instance entities, such as IAM users, Lambdas in IAM
roles, or developer laptops using [AdRoll's
Hologram](https://github.com/AdRoll/hologram) then you would need to use the
iam auth method.
- If you have EC2 instances, then you could use either auth method. If you
need more granular filtering beyond just the instance profile of given EC2
instances (such as filtering based off the AMI the instance was launched
from), then you would need to use the ec2 auth method, change the instance
profile associated with your EC2 instances so they have unique IAM roles
for each different Vault role you would want them to authenticate
to, or make use of inferencing. If you need to make use of role tags, then
you will need to use the ec2 auth method.
## Recommended Vault IAM policy
This specifies the recommended IAM policy needed by the AWS auth method. Note
that if you are using the same credentials for the AWS auth and secret methods
(e.g., if you're running Vault on an EC2 instance in an IAM instance profile),
then you will need to add additional permissions as required by the AWS secret
method.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"iam:GetInstanceProfile",
"iam:GetUser",
"iam:GetRole"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": ["sts:AssumeRole"],
"Resource": ["arn:aws:iam::<AccountId>:role/<VaultRole>"]
},
{
"Sid": "ManageOwnAccessKeys",
"Effect": "Allow",
"Action": [
"iam:CreateAccessKey",
"iam:DeleteAccessKey",
"iam:GetAccessKeyLastUsed",
"iam:GetUser",
"iam:ListAccessKeys",
"iam:UpdateAccessKey"
],
"Resource": "arn:aws:iam::*:user/${aws:username}"
}
]
}
```
Here are some of the scenarios in which Vault would need to use each of these
permissions. This isn't intended to be an exhaustive list of all the scenarios
in which Vault might make an AWS API call, but rather illustrative of why these
are needed.
- `ec2:DescribeInstances` is necessary when you are using the `ec2` auth method
or when you are inferring an `ec2_instance` entity type to validate that the
EC2 instance meets binding requirements of the role
- `iam:GetInstanceProfile` is used when you have a `bound_iam_role_arn` in the
`ec2` auth method. Vault needs to determine which IAM role is attached to the
instance profile.
- `iam:GetUser` and `iam:GetRole` are used when using the iam auth method and
binding to an IAM user or role principal to determine the [AWS IAM Unique Identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-unique-ids)
or when using a wildcard on the bound ARN to resolve the full ARN of the user
or role.
- The `sts:AssumeRole` stanza is necessary when you are using [Cross Account
Access](#cross-account-access). The `Resource`s specified should be a list of
all the roles for which you have configured cross-account access, and each of
those roles should have this IAM policy attached (except for the
`sts:AssumeRole` statement).
- The `ManageOwnAccessKeys` stanza is necessary when you have configured Vault
with static credentials, and wish to rotate these credentials with the
[Rotate Root Credentials](/vault/api-docs/auth/aws#rotate-root-credentials)
API call.
## Plugin Workload Identity Federation (WIF)
<EnterpriseAlert product="vault" />
The AWS auth engine supports the plugin WIF workflow and has a source of identity called
a plugin identity token. A plugin identity token is a JWT that is signed internally by the Vault's
[plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration).
If there is a trust relationship configured between Vault and AWS through
[workload identity federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html),
the auth engine can exchange its identity token for short-lived STS credentials needed to
perform its actions.
Exchanging identity tokens for STS credentials lets the AWS auth engine
operate without configuring explicit access to sensitive IAM security
credentials.
To configure the auth engine to use plugin WIF:
1. Ensure that Vault [openid-configuration](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-openid-configuration)
and [public JWKS](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-public-jwks)
APIs are network-reachable by AWS. We recommend using an API proxy or gateway
if you need to limit Vault API exposure.
1. Create an
[IAM OIDC identity provider](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html)
in AWS.
1. The provider URL **must** point at your [Vault plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration) with the
`/.well-known/openid-configuration` suffix removed. For example:
`https://host:port/v1/identity/oidc/plugins`.
1. Uniquely identify the recipient of the plugin identity token as the audience.
In AWS, the recipient is the identity provider. We recommend using
the `host:port/v1/identity/oidc/plugins` portion of the provider URL as your
recipient since it will be unique for each configured identity provider.
1. Create a [web identity role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp_oidc.html#idp_oidc_Create)
in AWS with the same audience used for your IAM OIDC identity provider.
1. Configure the AWS auth engine with the IAM OIDC audience value and web
identity role ARN.
```shell-session
$ vault write auth/aws/config/client \
identity_token_audience="vault.example/v1/identity/oidc/plugins" \
role_arn="arn:aws:iam::123456789123:role/example-web-identity-role"
```
Your auth engine can now use plugin WIF for its configuration credentials.
By default, WIF [credentials](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html)
have a time-to-live of 1 hour and automatically refresh when they expire.
Please see the [API documentation](/vault/api-docs/auth/aws#configure-client)
for more details on the fields associated with plugin WIF.
## Client nonce
Note: this only applies to the ec2 auth method.
If an unintended party gains access to the PKCS#7 signature of the identity
document (which by default is available to every process and user that gains
access to an EC2 instance), it can impersonate that instance and fetch a Vault
token. The method addresses this problem with a Trust On First Use (TOFU)
mechanism: the first client that presents the PKCS#7 signature of the document
is authenticated, and all subsequent attempts are denied. An important property of
this design is detection of unauthorized access: if an unintended party authenticates,
the intended client will be unable to authenticate and can raise an alert for
investigation.
During the first login, the method stores the instance ID that authenticated
in an `accesslist`. One mode of operation is to disallow any further
authentication attempt for an instance ID contained in the access list by
setting the `disallow_reauthentication` option on the role, meaning that an
instance is allowed to login only once. However, this has consequences for
token rotation, as it means that once a token has expired, subsequent
authentication attempts would fail. By default, reauthentication is enabled in
this method, and can be turned off using the `disallow_reauthentication`
parameter on the registered role.
In the default mode of operation, the method returns a unique nonce during the
first authentication attempt, as part of auth `metadata`. Clients should
present this `nonce` on subsequent login attempts, and it must match the
`nonce` cached in the identity-accesslist entry within the method. Since only
the original client knows the `nonce`, only the original client is allowed to
reauthenticate. (This is the reason that this is an accesslist rather than a
deny list; by default, it keeps track of clients allowed to reauthenticate,
rather than those that are not.) Clients can choose to provide a `nonce` even
for the first login attempt, in which case the provided `nonce` will be tied to
the cached identity-accesslist entry. It is recommended to use a strong `nonce`
value in this case.
It is up to the client to behave correctly with respect to the nonce; if the
client stores the nonce on disk it can survive reboots, but could also give
access to other users or applications on the instance. It is also up to the
operator to ensure that client nonces are in fact unique; if nonces are shared,
an attacker who compromises the nonce value and gains access to any EC2 instance
can imitate the legitimate client on that instance. This is why nonces can be
disabled on the method side in favor of only a single authentication per
instance; in some cases, such as when using ASGs, instances are immutable and
single-boot anyway, and in conjunction with a high max TTL, reauthentication may
not be needed (and if it is, the instance can simply be shut down and the ASG
allowed to start a new one).
In both cases, entries can be removed from the accesslist by instance ID,
allowing reauthentication by a client if the nonce is lost (or not used) and an
operator approves the process.
One other point: if the OS/distribution being used on the EC2 instance supports
it, it is a good idea to firewall access to the signed PKCS#7 metadata to
ensure that it is accessible only to the specific user(s) that require access.
The client nonce, which is generated by the backend and returned along with
the authentication response, is audit logged in plaintext. If this is
undesired, clients can supply a custom nonce to the login endpoint, which will
not be returned and hence will not be audit logged.
## Advanced options and caveats
### Dynamic management of policies via role tags
Note: This only applies to the ec2 auth method or the iam auth method when
inferencing is used.
If the instance is required to have a customized set of policies based on the
role it plays, the `role_tag` option can be used to provide a tag to set on
instances for a given role. When this option is set, during login, along with
verification of PKCS#7 signature and instance health, the method will query
for the value of a specific tag with the configured key that is attached to the
instance. The tag holds information that represents a _subset_ of privileges that
are set on the role and are used to further restrict the set of the role's
privileges for that particular instance.
A `role_tag` can be created using the `auth/aws/role/<role>/tag` endpoint
and is immutable. The information present in the tag is SHA256 hashed and HMAC
protected. The per-role key used for the HMAC is maintained only in the method.
This prevents an adversarial operator from modifying the tag when setting it on
the EC2 instance in order to escalate privileges.
When the `role_tag` option is enabled on a role, instances are required to have a
role tag. If the tag is not found on the EC2 instance, authentication will fail.
This ensures that an instance's privileges can never be escalated by the tag
being absent or removed. If the role tag creation does
not specify the policy component, the client will inherit the allowed policies set
on the role. If the role tag creation specifies the policy component but it contains
no policies, the token will contain only the `default` policy; by default, this policy
allows only manipulation (revocation, renewal, lookup) of the existing token, plus
access to its [cubbyhole](/vault/docs/secrets/cubbyhole).
This can be useful to allow instances access to a secure "scratch space" for
storing data (via the token's cubbyhole) but without granting any access to
other resources provided by or resident in Vault.
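For example, assuming a role `dev-role-ec2` was created with role tags enabled,
a tag restricting an instance to a subset of the role's policies might be
generated as follows; the returned `tag_key` and `tag_value` are then set as a
tag on the EC2 instance (values are illustrative):

```shell-session
$ vault write auth/aws/role/dev-role-ec2/tag \
    policies=dev \
    max_ttl=4h
```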
### Handling lost client nonces
Note: This only applies to the ec2 auth method.
If an EC2 instance loses its client nonce (due to a reboot, a stop/start of the
client, etc.), subsequent login attempts will not succeed. If the client nonce
is lost, normally the only option is to delete the entry corresponding to the
instance ID from the identity `accesslist` in the method. This can be done via
the `auth/aws/identity-accesslist/<instance_id>` endpoint. This allows a new
client nonce to be accepted by the method during the next login request.
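For example, clearing the entry for a specific instance might look like this
(the instance ID is illustrative):

```shell-session
$ vault delete auth/aws/identity-accesslist/i-0123456789abcdef0
```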
Under certain circumstances there is another useful setting. When the instance
is placed onto a host upon creation, it is given a `pendingTime` value in the
instance identity document (documentation from AWS does not cover this option,
unfortunately). If an instance is stopped and started, the `pendingTime` value
is updated (this does not apply to reboots, however).
The method can take advantage of this via the `allow_instance_migration`
option, which is set per-role. When this option is enabled, if the client nonce
does not match the saved nonce, the `pendingTime` value in the instance
identity document will be checked; if it is newer than the stored `pendingTime`
value, the method assumes that the client was stopped/started and allows the
client to log in successfully, storing the new nonce as the valid nonce for
that client. This essentially re-starts the TOFU mechanism any time the
instance is stopped and started, so should be used with caution. Just like with
initial authentication, the legitimate client should have a way to alert (or an
alert should trigger based on its logs) if it is denied authentication.
Unfortunately, the `allow_instance_migration` only helps during stop/start
actions; the current metadata does not provide for a way to allow this
automatic behavior during reboots. The method will be updated if this needed
metadata becomes available.
The `allow_instance_migration` option is set per-role, and can also be
specified in a role tag. Since role tags can only restrict behavior, if the
option is set to `false` on the role, a value of `true` in the role tag takes
effect; however, if the option is set to `true` on the role, a value set in the
role tag has no effect.
### Disabling reauthentication
Note: this only applies to the ec2 auth method.
If in a given organization's architecture, a client fetches a long-lived Vault
token and has no need to rotate the token, all future logins for that instance
ID can be disabled. If the option `disallow_reauthentication` is set, only one
login will be allowed per instance. If the intended client successfully
retrieves a token during login, it can be sure that its token will not be
hijacked by another entity.
When `disallow_reauthentication` option is enabled, the client can choose not
to supply a nonce during login, although it is not an error to do so (the nonce
is simply ignored). Note that reauthentication is enabled by default. If only
a single login is desired, `disallow_reauthentication` should be set explicitly
on the role or on the role tag.
The `disallow_reauthentication` option is set per-role, and can also be
specified in a role tag. Since role tags can only restrict behavior, if the
option is set to `false` on the role, a value of `true` in the role tag takes
effect; however, if the option is set to `true` on the role, a value set in the
role tag has no effect.
### Deny listing role tags
Note: this only applies to the ec2 auth method or the iam auth method
when inferencing is used.
Role tags are tied to a specific role, but the method has no control over which
instances using that role should have any particular role tag; that is purely up
to the operator. Although role tags are only restrictive (a tag cannot escalate
privileges above what is set on its role), if a role tag is found to have been
used incorrectly, and the administrator wants to ensure that the role tag has no
further effect, the role tag can be placed on a `deny list` via the endpoint
`auth/aws/roletag-denylist/<role_tag>`. Note that this will not invalidate the
tokens that were already issued; this only blocks any further login requests from
those instances that have the deny listed tag attached to them.
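For example, a role tag value can be deny listed as follows (the tag value shown is illustrative; use the actual value generated by the role tag endpoint):

```shell-session
$ vault write -f auth/aws/roletag-denylist/v1:09Vp0qGuyB8=
```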
### Expiration times and tidying of `denylist` and `accesslist` entries
The expired entries in both identity `accesslist` and role tag `denylist` are
deleted automatically. The entries in both of these lists contain an expiration
time which is dynamically determined by three factors: `max_ttl` set on the role,
`max_ttl` set on the role tag, and `max_ttl` value of the method mount. The
least of these three dictates the maximum TTL of the issued token, and
correspondingly will be set as the expiration times of these entries.
The endpoints `auth/aws/tidy/identity-accesslist` and `auth/aws/tidy/roletag-denylist` are
provided to clean up the entries present in these lists. These endpoints allow
defining a safety buffer, such that an entry must not only be expired, but be
past expiration by the amount of time dictated by the safety buffer in order
to actually remove the entry.
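For example, to manually tidy the deny list with a custom safety buffer (the buffer value is illustrative):

```shell-session
$ vault write auth/aws/tidy/roletag-denylist safety_buffer=48h
```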
Automatic deletion of expired entries is performed by the periodic function
of the method. This function tidies both the role tag deny list and the
identity access list. Periodic tidying is activated by default with
a safety buffer of 72 hours, meaning an entry is only deleted once it has been
expired for more than 72 hours at the time the tidy operation runs.
This behavior can be configured via the `config/tidy/roletag-denylist` and
`config/tidy/identity-accesslist` endpoints.
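For example, the periodic tidy of the deny list could be tuned as follows (the values shown are illustrative):

```shell-session
$ vault write auth/aws/config/tidy/roletag-denylist \
    safety_buffer=96h \
    disable_periodic_tidy=false
```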
### Varying public certificates
Note: this only applies to the ec2 auth method.
The AWS public certificate, which contains the public key used to verify the
PKCS#7 signature, varies for different AWS regions. The primary AWS public
certificate, which covers most AWS regions, is already included in Vault and
does not need to be added. Instances whose PKCS#7 signatures cannot be
verified by the default public certificate included in Vault can register a
different public certificate which can be found [here](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-identity-documents.html),
via the `auth/aws/config/certificate/<cert_name>` endpoint.
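For example, assuming the PEM-encoded certificate has been saved locally, it could be registered as follows (the certificate name and file name are placeholders):

```shell-session
$ vault write auth/aws/config/certificate/my-region-cert \
    aws_public_cert=@aws_public_cert.pem \
    type=pkcs7
```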
### Dangling tokens
An EC2 instance, after authenticating itself with the method, gets a Vault token.
If the instance later terminates or goes down for any reason, the method
will not be aware of the event, and the issued token will remain valid until
it expires. In practice, the token will likely expire before its maximum
lifetime, since the instance will fail to renew the token on time.
### Cross account access
To allow Vault to authenticate IAM principals and EC2 instances in other
accounts, Vault supports using AWS STS (Security Token Service) to assume AWS
IAM roles in other accounts. For each target AWS account ID, you configure the
IAM role for Vault to assume using the `auth/aws/config/sts/<account_id>`
endpoint, and Vault will use the credentials obtained by assuming that role to
validate IAM principals and EC2 instances in the target account.
The account in which Vault is running (i.e. the master account) must be listed as
a trusted entity in the IAM Role being assumed on the remote account. The Role itself
should allow the permissions specified in the [Recommended Vault IAM
Policy](#recommended-vault-iam-policy) except it doesn't need any further
`sts:AssumeRole` permissions.
Furthermore, in the master account, Vault must be granted the action `sts:AssumeRole`
for the IAM Role to be assumed.
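For example, cross account access for a hypothetical second account could be configured as follows (the account ID and role ARN are placeholders):

```shell-session
$ vault write auth/aws/config/sts/111122223333 \
    sts_role=arn:aws:iam::111122223333:role/VaultAuthCrossAccountRole
```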
### AWS instance metadata timeout
@include 'aws-imds-timeout.mdx'
## Authentication
### Via the CLI
#### Enable AWS EC2 authentication in Vault.
```shell-session
$ vault auth enable aws
```
#### Configure the credentials required to make AWS API calls
If not specified, Vault will attempt to use standard environment variables
(`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`) or IAM EC2 instance role
credentials if available.
The IAM account or role to which the credentials map must allow the
`ec2:DescribeInstances` action. In addition, if IAM Role binding is used (see
`bound_iam_role_arn` below), `iam:GetInstanceProfile` must also be allowed.
To provide IAM security credentials to Vault, we recommend using Vault
[plugin workload identity federation](#plugin-workload-identity-federation-wif)
(WIF).
```shell-session
# Option 1: static IAM credentials
$ vault write auth/aws/config/client \
    secret_key=vCtSM8ZUEQ3mOFVlYPBQkf2sO6F/W7a5TVzrl3Oj \
    access_key=VKIAJBRHKH6EVTTNXDHA

# Option 2: plugin workload identity federation (WIF)
$ vault write auth/aws/config/client \
    identity_token_audience="vault.example/v1/identity/oidc/plugins" \
    role_arn="arn:aws:iam::123456789123:role/example-web-identity-role"
```
#### Configure the policies on the role.
```shell-session
$ vault write auth/aws/role/dev-role auth_type=ec2 bound_ami_id=ami-fce3c696 policies=prod,dev max_ttl=500h
$ vault write auth/aws/role/dev-role-iam auth_type=iam \
bound_iam_principal_arn=arn:aws:iam::123456789012:role/MyRole policies=prod,dev max_ttl=500h
```
#### Configure a required X-Vault-AWS-IAM-Server-ID header (recommended)
```shell-session
$ vault write auth/aws/config/client iam_server_id_header_value=vault.example.com
```
#### Perform the login operation
For the ec2 auth method, first fetch the PKCS#7 signature on the AWS instance:
```shell-session
$ SIGNATURE=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/rsa2048 | tr -d '\n')
```
then set the signature on the login endpoint:
```shell-session
$ vault write auth/aws/login role=dev-role \
pkcs7=$SIGNATURE
```
For the iam auth method, generating the signed request is a non-standard
operation. The Vault CLI can generate it for you:
```shell-session
$ vault login -method=aws header_value=vault.example.com role=dev-role-iam
```
This assumes you have AWS credentials configured in the standard locations AWS
SDKs search for credentials (environment variables, ~/.aws/credentials, IAM
instance profile, or ECS task role, in that order). If you do not have IAM
credentials available at any of these locations, you can explicitly pass them
in on the command line (though this is not recommended), omitting
`aws_security_token` if not applicable.
```shell-session
$ vault login -method=aws header_value=vault.example.com role=dev-role-iam \
aws_access_key_id=<access_key> \
aws_secret_access_key=<secret_key> \
aws_security_token=<security_token>
```
The region used defaults to `us-east-1`, but you can specify a custom region like so:
```shell-session
$ vault login -method=aws region=us-west-2 role=dev-role-iam
```
If the region is specified as `auto`, the Vault CLI will determine the region
from the standard AWS configuration sources described earlier. Whichever method
is used, be sure the designated region corresponds to that of the STS endpoint
you're using.
~> **Note:** If you are making use of AWS GovCloud and setting the `sts_endpoint`
and `sts_region` role parameters to `us-gov-west-1` / `us-gov-east-1` then you must include
the `region` argument in your login request with a matching value, i.e. `region=us-gov-west-1`.
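For example, a GovCloud login might look like the following (the role name is a placeholder):

```shell-session
$ vault login -method=aws region=us-gov-west-1 role=dev-role-iam
```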
An example of how to generate the required request values for the `login` method
can be found in the [Vault CLI
source code](https://github.com/hashicorp/vault/blob/main/builtin/credential/aws/cli.go).
Using an approach such as this, the request parameters can be generated and
passed to the `login` method:
```shell-session
$ vault write auth/aws/login role=dev-role-iam \
iam_http_request_method=POST \
iam_request_url=aHR0cHM6Ly9zdHMuYW1hem9uYXdzLmNvbS8= \
iam_request_body=QWN0aW9uPUdldENhbGxlcklkZW50aXR5JlZlcnNpb249MjAxMS0wNi0xNQ== \
iam_request_headers=eyJDb250ZW50LUxlbmd0aCI6IFsiNDMiXSwgIlVzZXItQWdlbnQiOiBbImF3cy1zZGstZ28vMS40LjEyIChnbzEuNy4xOyBsaW51eDsgYW1kNjQpIl0sICJYLVZhdWx0LUFXU0lBTS1TZXJ2ZXItSWQiOiBbInZhdWx0LmV4YW1wbGUuY29tIl0sICJYLUFtei1EYXRlIjogWyIyMDE2MDkzMFQwNDMxMjFaIl0sICJDb250ZW50LVR5cGUiOiBbImFwcGxpY2F0aW9uL3gtd3d3LWZvcm0tdXJsZW5jb2RlZDsgY2hhcnNldD11dGYtOCJdLCAiQXV0aG9yaXphdGlvbiI6IFsiQVdTNC1ITUFDLVNIQTI1NiBDcmVkZW50aWFsPWZvby8yMDE2MDkzMC91cy1lYXN0LTEvc3RzL2F3czRfcmVxdWVzdCwgU2lnbmVkSGVhZGVycz1jb250ZW50LWxlbmd0aDtjb250ZW50LXR5cGU7aG9zdDt4LWFtei1kYXRlO3gtdmF1bHQtc2VydmVyLCBTaWduYXR1cmU9YTY5ZmQ3NTBhMzQ0NWM0ZTU1M2UxYjNlNzlkM2RhOTBlZWY1NDA0N2YxZWI0ZWZlOGZmYmM5YzQyOGMyNjU1YiJdfQ==
```
### Via the API
#### Enable AWS authentication in Vault.
```
curl -X POST -H "X-Vault-Token:123" "http://127.0.0.1:8200/v1/sys/auth/aws" -d '{"type":"aws"}'
```
#### Configure the credentials required to make AWS API calls.
```
curl -X POST -H "X-Vault-Token:123" "http://127.0.0.1:8200/v1/auth/aws/config/client" -d '{"access_key":"VKIAJBRHKH6EVTTNXDHA", "secret_key":"vCtSM8ZUEQ3mOFVlYPBQkf2sO6F/W7a5TVzrl3Oj"}'
```
#### Configure the policies on the role.
```
curl -X POST -H "X-Vault-Token:123" "http://127.0.0.1:8200/v1/auth/aws/role/dev-role" -d '{"bound_ami_id":"ami-fce3c696","policies":"prod,dev","max_ttl":"500h"}'
curl -X POST -H "X-Vault-Token:123" "http://127.0.0.1:8200/v1/auth/aws/role/dev-role-iam" -d '{"auth_type":"iam","policies":"prod,dev","max_ttl":"500h","bound_iam_principal_arn":"arn:aws:iam::123456789012:role/MyRole"}'
```
#### Perform the login operation
```
curl -X POST "http://127.0.0.1:8200/v1/auth/aws/login" -d '{"role":"dev-role","pkcs7":"'$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/rsa2048 | tr -d '\n')'","nonce":"5defbf9e-a8f9-3063-bdfc-54b7a42a1f95"}'
curl -X POST "http://127.0.0.1:8200/v1/auth/aws/login" -d '{"role":"dev", "iam_http_request_method": "POST", "iam_request_url": "aHR0cHM6Ly9zdHMuYW1hem9uYXdzLmNvbS8=", "iam_request_body": "QWN0aW9uPUdldENhbGxlcklkZW50aXR5JlZlcnNpb249MjAxMS0wNi0xNQ==", "iam_request_headers": "eyJDb250ZW50LUxlbmd0aCI6IFsiNDMiXSwgIlVzZXItQWdlbnQiOiBbImF3cy1zZGstZ28vMS40LjEyIChnbzEuNy4xOyBsaW51eDsgYW1kNjQpIl0sICJYLVZhdWx0LUFXU0lBTS1TZXJ2ZXItSWQiOiBbInZhdWx0LmV4YW1wbGUuY29tIl0sICJYLUFtei1EYXRlIjogWyIyMDE2MDkzMFQwNDMxMjFaIl0sICJDb250ZW50LVR5cGUiOiBbImFwcGxpY2F0aW9uL3gtd3d3LWZvcm0tdXJsZW5jb2RlZDsgY2hhcnNldD11dGYtOCJdLCAiQXV0aG9yaXphdGlvbiI6IFsiQVdTNC1ITUFDLVNIQTI1NiBDcmVkZW50aWFsPWZvby8yMDE2MDkzMC91cy1lYXN0LTEvc3RzL2F3czRfcmVxdWVzdCwgU2lnbmVkSGVhZGVycz1jb250ZW50LWxlbmd0aDtjb250ZW50LXR5cGU7aG9zdDt4LWFtei1kYXRlO3gtdmF1bHQtc2VydmVyLCBTaWduYXR1cmU9YTY5ZmQ3NTBhMzQ0NWM0ZTU1M2UxYjNlNzlkM2RhOTBlZWY1NDA0N2YxZWI0ZWZlOGZmYmM5YzQyOGMyNjU1YiJdfQ==" }'
```
The response will be in JSON. For example:
```javascript
{
"auth": {
"renewable": true,
"lease_duration": 72000,
"metadata": {
"role_tag_max_ttl": "0s",
"role": "ami-f083709d",
"region": "us-east-1",
"nonce": "5defbf9e-a8f9-3063-bdfc-54b7a42a1f95",
"instance_id": "i-a832f734",
"ami_id": "ami-f083709d"
},
"policies": [
"default",
"dev",
"prod"
],
"accessor": "5cd96cd1-58b7-2904-5519-75ddf957ec06",
"client_token": "150fc858-2402-49c9-56a5-f4b57f2c8ff1"
},
"warnings": null,
"wrap_info": null,
"data": null,
"lease_duration": 0,
"renewable": false,
"lease_id": "",
"request_id": "d7d50c06-56b8-37f4-606c-ccdc87a1ee4c"
}
```
## API
The AWS auth method has a full HTTP API. Please see the
[AWS Auth API](/vault/api-docs/auth/aws) for more
details.
## Code example
The following example demonstrates how to authenticate with Vault using the AWS IAM auth method.
<CodeTabs>
<CodeBlockConfig>
```go
package main
import (
"context"
"fmt"
vault "github.com/hashicorp/vault/api"
auth "github.com/hashicorp/vault/api/auth/aws"
)
// Fetches a key-value secret (kv-v2) after authenticating to Vault via AWS IAM,
// one of two auth methods used to authenticate with AWS (the other is EC2 auth).
func getSecretWithAWSAuthIAM() (string, error) {
config := vault.DefaultConfig() // modify for more granular configuration
client, err := vault.NewClient(config)
if err != nil {
return "", fmt.Errorf("unable to initialize Vault client: %w", err)
}
awsAuth, err := auth.NewAWSAuth(
auth.WithRole("dev-role-iam"), // if not provided, Vault will fall back on looking for a role with the IAM role name if you're using the iam auth type, or the EC2 instance's AMI id if using the ec2 auth type
)
if err != nil {
return "", fmt.Errorf("unable to initialize AWS auth method: %w", err)
}
authInfo, err := client.Auth().Login(context.Background(), awsAuth)
if err != nil {
return "", fmt.Errorf("unable to login to AWS auth method: %w", err)
}
if authInfo == nil {
return "", fmt.Errorf("no auth info was returned after login")
}
// get secret from the default mount path for KV v2 in dev mode, "secret"
secret, err := client.KVv2("secret").Get(context.Background(), "creds")
if err != nil {
return "", fmt.Errorf("unable to read secret: %w", err)
}
// data map can contain more than one key-value pair,
// in this case we're just grabbing one of them
value, ok := secret.Data["password"].(string)
if !ok {
return "", fmt.Errorf("value type assertion failed: %T %#v", secret.Data["password"], secret.Data["password"])
}
return value, nil
}
```
</CodeBlockConfig>
<CodeBlockConfig>
```cs
using System;
using System.Text;
using Amazon.Runtime;
using Amazon.Runtime.Internal;
using Amazon.Runtime.Internal.Auth;
using Amazon.Runtime.Internal.Util;
using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;
using Amazon.SecurityToken.Model.Internal.MarshallTransformations;
using Newtonsoft.Json;
using VaultSharp;
using VaultSharp.V1.AuthMethods;
using VaultSharp.V1.AuthMethods.AWS;
using VaultSharp.V1.Commons;
using VaultSharp.V1.SecretsEngines.AWS;
namespace Examples
{
public class AwsAuthExample
{
/// <summary>
/// Fetches a key-value secret (kv-v2) after authenticating to Vault via AWS IAM,
/// one of two auth methods used to authenticate with AWS (the other is EC2 auth).
/// </summary>
public string GetSecretAWSAuthIAM()
{
var vaultAddr = Environment.GetEnvironmentVariable("VAULT_ADDR");
if(String.IsNullOrEmpty(vaultAddr))
{
throw new System.ArgumentNullException("Vault Address");
}
var roleName = Environment.GetEnvironmentVariable("VAULT_ROLE");
if(String.IsNullOrEmpty(roleName))
{
throw new System.ArgumentNullException("Vault Role Name");
}
var amazonSecurityTokenServiceConfig = new AmazonSecurityTokenServiceConfig();
// Initialize BasicAWS Credentials w/ an accessKey and secretKey
Amazon.Runtime.AWSCredentials awsCredentials = new BasicAWSCredentials(accessKey: Environment.GetEnvironmentVariable("AWS_ACCESS_KEY_ID"),
secretKey: Environment.GetEnvironmentVariable("AWS_SECRET_ACCESS_KEY"));
// Construct the IAM Request and add necessary headers
var iamRequest = GetCallerIdentityRequestMarshaller.Instance.Marshall(new GetCallerIdentityRequest());
iamRequest.Endpoint = new Uri(amazonSecurityTokenServiceConfig.DetermineServiceURL());
iamRequest.ResourcePath = "/";
iamRequest.Headers.Add("User-Agent", "some-agent");
iamRequest.Headers.Add("X-Amz-Security-Token", awsCredentials.GetCredentials().Token);
iamRequest.Headers.Add("Content-Type", "application/x-www-form-urlencoded; charset=utf-8");
new AWS4Signer().Sign(iamRequest, amazonSecurityTokenServiceConfig, new RequestMetrics(), awsCredentials.GetCredentials().AccessKey, awsCredentials.GetCredentials().SecretKey);
var iamSTSRequestHeaders = iamRequest.Headers;
// Convert headers to Base64 encoded version
var base64EncodedIamRequestHeaders = Convert.ToBase64String(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(iamSTSRequestHeaders)));
IAuthMethodInfo authMethod = new IAMAWSAuthMethodInfo(roleName: roleName, requestHeaders: base64EncodedIamRequestHeaders);
var vaultClientSettings = new VaultClientSettings(vaultAddr, authMethod);
IVaultClient vaultClient = new VaultClient(vaultClientSettings);
// We can retrieve the secret from the VaultClient object
Secret<SecretData> kv2Secret = null;
kv2Secret = vaultClient.V1.Secrets.KeyValue.V2.ReadSecretAsync(path: "/creds").Result;
var password = kv2Secret.Data.Data["password"];
return password.ToString();
}
}
}
```
</CodeBlockConfig>
</CodeTabs> | vault | layout docs page title AWS Auth Methods description The aws auth method allows automated authentication of AWS entities AWS auth method include x509 sha1 deprecation mdx include aws sha1 deprecation mdx The aws auth method provides an automated mechanism to retrieve a Vault token for IAM principals and AWS EC2 instances Unlike most Vault auth methods this method does not require manual first deploying or provisioning security sensitive credentials tokens username password client certificates etc by operators under many circumstances Authentication workflow There are two authentication types present in the aws auth method iam and ec2 With the iam method a special AWS request signed with AWS IAM credentials is used for authentication The IAM credentials are automatically supplied to AWS instances in IAM instance profiles Lambda functions and others and it is this information already provided by AWS which Vault can use to authenticate clients With the ec2 method AWS is treated as a Trusted Third Party and cryptographically signed dynamic metadata information that uniquely represents each EC2 instance is used for authentication This metadata information is automatically supplied by AWS to all EC2 instances Based on how you attempt to authenticate Vault will determine if you are attempting to use the iam or ec2 type Each has a different authentication workflow and each can solve different use cases Note The ec2 method was implemented before the primitives to implement the iam method were supported by AWS The iam method is the recommended approach as it is more flexible and aligns with best practices to perform access control and authentication See the section on comparing the two auth methods below for more information Usage See the Authentication authentication section for Vault CLI and API usage examples The Code Example code example section provides a code snippet demonstrating the authentication with Vault using the AWS auth method IAM auth method The AWS STS API includes a method sts GetCallerIdentity http docs aws amazon com STS latest APIReference API GetCallerIdentity html which allows you to validate the identity of a client The client signs a GetCallerIdentity query using the AWS Signature v4 algorithm http docs aws amazon com general latest gr sigv4 signing html and sends it to the Vault server The credentials used to sign the GetCallerIdentity request can come from the EC2 instance metadata service for an EC2 instance or from the AWS environment variables in an AWS Lambda function execution which obviates the need for an operator to manually provision some sort of identity material first However the credentials can in principle come from anywhere not just from the locations AWS has provided for you The GetCallerIdentity query consists of four pieces of information the request URL the request body the request headers and the request method as the AWS signature is computed over those fields The Vault server reconstructs the query using this information and forwards it on to the AWS STS service Depending on the response from the STS service the server authenticates the client Notably clients don t need network level access themselves to talk to the AWS STS API endpoint they merely need access to the credentials to sign the request However it means that the Vault server does need network level access to send requests to the STS endpoint Each signed AWS request includes the current timestamp to mitigate the risk of replay attacks In addition Vault allows you to 
---
layout: docs
page_title: GitHub - Auth Methods
description: The GitHub auth method allows authentication with Vault using GitHub.
---
# GitHub auth method
The `github` auth method can be used to authenticate with Vault using a GitHub
personal access token. This method of authentication is most useful for humans:
operators or developers using Vault directly via the CLI.
~> **IMPORTANT NOTE:** Vault does not support an OAuth workflow to generate
GitHub tokens, so does not act as a GitHub application. As a result, this method
uses personal access tokens. If the risks below are unacceptable to you, consider
using a different authentication method.
~> Any valid GitHub access token with the `read:org` scope for any user belonging
to the Vault-configured organization can be used for authentication. If such a
token is stolen from a third party service, and the attacker is able to make
network calls to Vault, they will be able to log in as the user that generated
the access token.
~> If the GitHub team is part of an organization with SSO enabled, the user will
need to authorize the personal access token. Failing to do so for SSO users will
result in the personal access token not providing identity information. The token
issued by the auth method will only be assigned the default policy.
## Authentication
### Via the CLI
The default path is `/github`. If this auth method was enabled at a different
path, specify `-path=/my-path` in the CLI.
```shell-session
$ vault login -method=github token="MY_TOKEN"
```
### Via the API
The default endpoint is `auth/github/login`. If this auth method was enabled
at a different path, use that value instead of `github`.
```shell-session
$ curl \
--request POST \
--data '{"token": "MY_TOKEN"}' \
http://127.0.0.1:8200/v1/auth/github/login
```
The response will contain a token at `auth.client_token`:
```json
{
"auth": {
"renewable": true,
"lease_duration": 2764800,
"metadata": {
"username": "my-user",
"org": "my-org"
},
"policies": ["default", "dev-policy"],
"accessor": "f93c4b2d-18b6-2b50-7a32-0fecf88237b8",
"client_token": "1977fceb-3bfa-6c71-4d1f-b64af98ac018"
}
}
```
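Vault's Go client does not currently ship a dedicated GitHub login helper, but you can call the login endpoint directly with the generic client. The following is a minimal sketch, assuming the default `github` mount path and a `MY_GITHUB_TOKEN` environment variable (both are assumptions, not part of the examples above):

```go
package main

import (
	"fmt"
	"log"
	"os"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// DefaultConfig reads VAULT_ADDR and related environment variables.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatalf("unable to initialize Vault client: %v", err)
	}

	// POST the personal access token to auth/github/login; adjust the path
	// if the auth method is mounted somewhere other than "github".
	secret, err := client.Logical().Write("auth/github/login", map[string]interface{}{
		"token": os.Getenv("MY_GITHUB_TOKEN"),
	})
	if err != nil {
		log.Fatalf("login failed: %v", err)
	}
	if secret == nil || secret.Auth == nil {
		log.Fatal("no auth info returned from login")
	}

	// Use the returned client token for subsequent requests.
	client.SetToken(secret.Auth.ClientToken)
	fmt.Println("logged in with policies:", secret.Auth.Policies)
}
```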
## Configuration
Auth methods must be configured in advance before users or machines can
authenticate. These steps are usually completed by an operator or configuration
management tool.
1. Enable the GitHub auth method:
```text
$ vault auth enable github
```
1. Use the `/config` endpoint to configure Vault to talk to GitHub.
```text
$ vault write auth/github/config organization=hashicorp
```
For the complete list of configuration options, please see the API
documentation.
1. Map the users/teams of that GitHub organization to policies in Vault. Team
names must be "slugified":
```text
$ vault write auth/github/map/teams/dev value=dev-policy
```
In this example, when members of the team "dev" in the organization
"hashicorp" authenticate to Vault using a GitHub personal access token, they
will be given a token with the "dev-policy" policy attached.
***
You can also create mappings for a specific user with the `map/users/<user>`
endpoint:
```text
$ vault write auth/github/map/users/sethvargo value=sethvargo-policy
```
In this example, a user with the GitHub username `sethvargo` will be
assigned the `sethvargo-policy` policy **in addition to** any team policies.
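To verify either kind of mapping, you can read it back from the same path, for
example:

```text
$ vault read auth/github/map/teams/dev
```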
## API
The GitHub auth method has a full HTTP API. Please see the
[GitHub Auth API](/vault/api-docs/auth/github) for more
details.
---
layout: docs
page_title: Kerberos - Auth Methods
description: The Kerberos auth method allows automated authentication of Kerberos entities.
---
# Kerberos auth method
@include 'x509-sha1-deprecation.mdx'
The `kerberos` auth method provides an automated mechanism to retrieve
a Vault token for Kerberos entities.
[Kerberos](https://web.mit.edu/kerberos/) is a network authentication
protocol invented by MIT in the 1980s. Its name is inspired by Cerberus,
the three-headed hound of Hades from Greek mythology. The three heads
refer to Kerberos' three entities - an authentication server, a ticket
granting server, and a principals database. Kerberos underlies
authentication in Active Directory, and its purpose is to _distribute_
a network's authentication workload.
Vault's Kerberos auth method was originally written by the folks at
[Winton](https://github.com/wintoncode), to whom we owe a special thanks
for both originally building the plugin, and for collaborating to bring
it into HashiCorp's maintenance.
## Prerequisites
Kerberos is a very hands-on auth method. Other auth methods like
[LDAP](/vault/docs/auth/ldap) and
[Azure](/vault/docs/auth/azure) only require
a cursory amount of knowledge for configuration and use.
Kerberos, on the other hand, is best used by people already familiar
with it. We recommend that you use simpler authentication methods if
your use case is achievable through them. If not, we recommend that
before approaching Kerberos, you become familiar with its fundamentals.
- [MicroNugget: How Kerberos Works in Windows Active Directory](https://www.youtube.com/watch?v=kp5d8Yv3-0c)
- [MIT's Kerberos Documentation](https://web.mit.edu/kerberos/)
- [Kerberos: The Definitive Guide](https://www.amazon.com/Kerberos-Definitive-Guide-ebook-dp-B004P1J81C/dp/B004P1J81C/ref=mt_kindle?_encoding=UTF8&me=&qid=1573685442)
Regardless of how you gain your knowledge, before using this auth method,
ensure you are comfortable with Kerberos' high-level architecture, and
ensure you've gone through the exercise of:
- Creating a valid `krb5.conf` file
- Creating a valid `keytab` file
- Authenticating to your domain server with your `keytab` file using `kinit`
With that knowledge in hand, and with an environment that's already tested
and confirmed working, you will be ready to use Kerberos with Vault.
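As a concrete sketch of that last exercise, a keytab-based login usually looks
like the following (the keytab path and principal here are placeholders for
your own environment):

```shell-session
$ kinit -kt /etc/krb5/vault.keytab your_service_account@REALM.COM
$ klist
```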
## Configuration
- Enable Kerberos authentication in Vault:
```shell-session
$ vault auth enable \
-passthrough-request-headers=Authorization \
-allowed-response-headers=www-authenticate \
kerberos
```
- Create a `keytab` for the Kerberos plugin (this keytab is used by the Vault server itself; a separate keytab should be generated for login purposes):
```shell-session
$ ktutil
ktutil:  addent -password -p your_service_account@REALM.COM -e aes256-cts -k 1
Password for your_service_account@REALM.COM:
ktutil: list -e
slot KVNO Principal
---- ---- ---------------------------------------------------------------------
   1    1 your_service_account@REALM.COM (aes256-cts-hmac-sha1-96)
ktutil: wkt vault.keytab
```
The KVNO (`-k 1`) should match the KVNO of the service account. An error will show in the Vault logs if this is incorrect.
Different encryption types can also be added to the `keytab`, for example `-e rc4-hmac` with additional `addent` commands.
Then base64 encode it:
```shell-session
$ base64 vault.keytab > vault.keytab.base64
```
- Configure the Kerberos auth method with the `keytab` and
entry name that will be used to verify inbound login
requests:
```shell-session
$ vault write auth/kerberos/config \
    keytab=@vault.keytab.base64 \
service_account="vault_svc"
```
- Configure the Kerberos auth method to communicate with
LDAP using the service account configured above. This is
a sample LDAP configuration. Yours will vary. Ensure you've
first tested your configuration from the Vault server using
a tool like `ldapsearch`.
```shell-session
$ vault write auth/kerberos/config/ldap \
    binddn=vault_svc@MATRIX.LAN \
bindpass=$VAULT_SVC_PASSWORD \
groupattr=sAMAccountName \
groupdn="DC=MATRIX,DC=LAN" \
groupfilter="(&(objectClass=group)(member:1.2.840.113556.1.4.1941:=))" \
userdn="CN=Users,DC=MATRIX,DC=LAN" \
userattr=sAMAccountName \
upndomain=MATRIX.LAN \
url=ldaps://somewhere.foo
```
The LDAP above relies upon the same code as the LDAP auth method.
See [its documentation](/vault/docs/auth/ldap)
for further discussion of available parameters.
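  As a sketch of the `ldapsearch` smoke test suggested above, reusing the
  sample values (adjust the base DN and filter for your directory):

  ```shell-session
  $ ldapsearch -H ldaps://somewhere.foo \
      -D "vault_svc@MATRIX.LAN" \
      -w "$VAULT_SVC_PASSWORD" \
      -b "CN=Users,DC=MATRIX,DC=LAN" \
      "(sAMAccountName=vault_svc)"
  ```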
- Configure the Vault policies that should be granted to those
who successfully authenticate based on their LDAP group membership.
Since this is identical to the LDAP auth method, see
[Group Membership Resolution](/vault/docs/auth/ldap#group-membership-resolution)
and [LDAP Group -> Policy Mapping](/vault/docs/auth/ldap#ldap-group-policy-mapping)
for further discussion.
```shell-session
$ vault write auth/kerberos/groups/engineering-team \
policies=engineers
```
The above group grants the "engineers" policy to those who authenticate
via Kerberos and are found to be members of the "engineering-team" LDAP
group.
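  You can read the mapping back to confirm it was stored as expected:

  ```shell-session
  $ vault read auth/kerberos/groups/engineering-team
  ```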
## Authentication
From a client machine with a valid `krb5.conf` and `keytab`, perform a command
like the following:
```shell-session
$ vault login -method=kerberos \
username=grace \
service=HTTP/my-service \
realm=MATRIX.LAN \
keytab_path=/etc/krb5/krb5.keytab \
krb5conf_path=/etc/krb5.conf \
disable_fast_negotiation=false
```
- `krb5conf_path` is the path to a valid `krb5.conf` file describing how to
communicate with the Kerberos environment.
- `keytab_path` is the path to the `keytab` in which the entry lives for the
entity authenticating to Vault. Keytab files should be protected from other
users on a shared server using appropriate file permissions.
- `username` is the username for the entry _within_ the `keytab` to use for
logging into Kerberos. This username must match a service account in LDAP.
- `service` is the service principal name to use in obtaining a service ticket for
gaining a SPNEGO token. This service must exist in LDAP.
- `realm` is the name of the Kerberos realm. This realm must match the UPNDomain
configured on the LDAP connection. This check is case-sensitive.
- `disable_fast_negotiation` is for disabling the Kerberos auth method's default
of using FAST negotiation. FAST is a pre-authentication framework for Kerberos.
It includes a mechanism for tunneling pre-authentication exchanges using armoured
KDC messages. FAST provides increased resistance to passive password guessing attacks.
Some common Kerberos implementations do not support FAST negotiation.
- `remove_instance_name` removes any instance names from a Kerberos service
  principal name when parsing the keytab file. For example, when this is set to true,
  if a keytab has the service principal name `foo/localhost@example.com`, the CLI
  will strip the service principal name to just be `foo@example.com`.
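Under the hood, the CLI obtains a SPNEGO token for the configured service
principal and submits it to the login endpoint in an `Authorization: Negotiate`
header. If you generate the SPNEGO token yourself, the equivalent raw API call
is sketched below (the token value is a placeholder):

```shell-session
$ curl \
    --request POST \
    --header "Authorization: Negotiate <base64-encoded SPNEGO token>" \
    http://127.0.0.1:8200/v1/auth/kerberos/login
```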
## Troubleshooting
### Identify the malfunctioning piece
Once the malfunctioning piece of the journey is identified, you can focus
your debugging efforts in the most useful direction.
1. Use `ldapsearch` while logged into your machine hosting Vault to ensure
your LDAP configuration is functional.
2. Authenticate to your domain server using `kinit`, your `keytab`, and your
`krb5.conf`. Do this with both Vault's `keytab`, and any client `keytab` being
used for logging in. This ensures your Kerberos network is working.
3. While logged into your client machine, verify you can reach Vault
through the following command: `$ curl $VAULT_ADDR/v1/sys/health`.
### Build clear steps to reproduce the problem
If possible, make it easy for someone else to reproduce the problem who
is outside of your company. For instance, if you expect that you should
be able to login using a command like:
```shell-session
$ vault login -method=kerberos \
username=my-name \
service=HTTP/my-service \
realm=EXAMPLE.COM \
keytab_path=/etc/krb5/krb5.keytab \
krb5conf_path=/etc/krb5.conf
```
Then make sure you're ready to share the error output of that command, the
contents of the `krb5.conf` file, and [the entries listed](https://docs.oracle.com/cd/E19683-01/806-4078/6jd6cjs1q/index.html)
in the `keytab` file.
After you've stripped the issue down to its simplest form, if you still
encounter difficulty resolving it, it will be much easier to gain assistance
by posting your reproduction to the [Vault Forum](https://discuss.hashicorp.com/c/vault)
or by providing it to [HashiCorp Support](https://www.hashicorp.com/support)
(if applicable).
### Additional troubleshooting resources
- [Troubleshooting Vault](/vault/tutorials/monitoring/troubleshooting-vault)
- [The plugin's code](https://github.com/hashicorp/vault-plugin-auth-kerberos)
The Vault Kerberos library has a working integration test environment that
can be referenced as an example of a full Kerberos and LDAP environment.
It runs through Docker and can be started through either one of the following
commands:
```shell-session
$ make integration
$ make dev-env
```
These commands run variations of [a script](https://github.com/hashicorp/vault-plugin-auth-kerberos/blob/master/scripts/integration_env.sh)
that spins up a full environment, adds users, and executes a login from a
client.
## API
The Kerberos auth method has a full HTTP API. Please see the
[Kerberos auth method API](/vault/api-docs/auth/kerberos) for more
details.
---
layout: docs
page_title: Kubernetes - Auth Methods
description: |-
The Kubernetes auth method allows automated authentication of Kubernetes
Service Accounts.
---
# Kubernetes auth method
@include 'x509-sha1-deprecation.mdx'
The `kubernetes` auth method can be used to authenticate with Vault using a
Kubernetes Service Account Token. This method of authentication makes it easy to
introduce a Vault token into a Kubernetes Pod.
You can also use a Kubernetes Service Account Token to [log in via JWT auth][k8s-jwt-auth].
See the section on [How to work with short-lived Kubernetes tokens][short-lived-tokens]
for a summary of why you might want to use JWT auth instead and how it compares to
Kubernetes auth.
-> **Note:** If you are upgrading to Kubernetes v1.21+, ensure the config option
`disable_iss_validation` is set to true. Assuming the default mount path, you
can check with `vault read -field disable_iss_validation auth/kubernetes/config`.
See [Kubernetes 1.21](#kubernetes-1-21) below for more details.
## Authentication
### Via the CLI
The default path is `/kubernetes`. If this auth method was enabled at a
different path, specify `-path=/my-path` in the CLI.
```shell-session
$ vault write auth/kubernetes/login role=demo jwt=...
```
### Via the API
The default endpoint is `auth/kubernetes/login`. If this auth method was enabled
at a different path, use that value instead of `kubernetes`.
```shell-session
$ curl \
--request POST \
--data '{"jwt": "<your service account jwt>", "role": "demo"}' \
http://127.0.0.1:8200/v1/auth/kubernetes/login
```
The response will contain a token at `auth.client_token`:
```json
{
"auth": {
"client_token": "38fe9691-e623-7238-f618-c94d4e7bc674",
"accessor": "78e87a38-84ed-2692-538f-ca8b9f400ab3",
"policies": ["default"],
"metadata": {
"role": "demo",
"service_account_name": "myapp",
"service_account_namespace": "default",
"service_account_secret_name": "myapp-token-pd21c",
"service_account_uid": "aa9aa8ff-98d0-11e7-9bb7-0800276d99bf"
},
"lease_duration": 2764800,
"renewable": true
}
}
```
## Configuration
Auth methods must be configured in advance before users or machines can
authenticate. These steps are usually completed by an operator or configuration
management tool.
1. Enable the Kubernetes auth method:
```shell-session
$ vault auth enable kubernetes
```
1. Use the `/config` endpoint to configure Vault to talk to Kubernetes. Use
`kubectl cluster-info` to validate the Kubernetes host address and TCP port.
For the list of available configuration options, please see the
[API documentation](/vault/api-docs/auth/kubernetes).
```shell-session
$ vault write auth/kubernetes/config \
token_reviewer_jwt="<your reviewer service account JWT>" \
kubernetes_host=https://192.168.99.100:<your TCP port or blank for 443> \
    kubernetes_ca_cert=@ca.crt
```
!> **Note:** The pattern Vault uses to authenticate Pods depends on sharing
the JWT token over the network. Given the [security model of
Vault](/vault/docs/internals/security), this is allowable because Vault is
part of the trusted compute base. In general, Kubernetes applications should
**not** share this JWT with other applications, as it allows API calls to be
made on behalf of the Pod and can result in unintended access being granted
to 3rd parties.
1. Create a named role:
```shell-session
$ vault write auth/kubernetes/role/demo \
bound_service_account_names=myapp \
bound_service_account_namespaces=default \
policies=default \
ttl=1h
```
   This role authorizes the "myapp" service account in the default
   namespace and gives it the default policy.
For the complete list of configuration options, please see the [API
documentation](/vault/api-docs/auth/kubernetes).
## Kubernetes 1.21
Starting in version [1.21][k8s-1.21-changelog], the Kubernetes
`BoundServiceAccountTokenVolume` feature defaults to enabled. This changes the
JWT token mounted into containers by default in two ways that are important for
Kubernetes auth:
* It has an expiry time and is bound to the lifetime of the pod and service account.
* The value of the JWT's `"iss"` claim depends on the cluster's configuration.
The changes to token lifetime are important when configuring the
[`token_reviewer_jwt`](/vault/api-docs/auth/kubernetes#token_reviewer_jwt) option.
If a short-lived token is used,
Kubernetes will revoke it as soon as the pod or service account are deleted, or
if the expiry time passes, and Vault will no longer be able to use the
`TokenReview` API. See [How to work with short-lived Kubernetes tokens][short-lived-tokens]
below for details on handling this change.
In response to the issuer changes, Kubernetes auth has been updated in Vault
1.9.0 to not validate the issuer by default. The Kubernetes API does the same
validation when reviewing tokens, so enabling issuer validation on the Vault
side is duplicated work. Without disabling Vault's issuer validation, it is not
possible for a single Kubernetes auth configuration to work for default mounted
pod tokens with both Kubernetes 1.20 and 1.21. Note that auth mounts created
before Vault 1.9 will maintain the old default, and you will need to explicitly
set `disable_iss_validation=true` before upgrading Kubernetes to 1.21. See
[Discovering the service account `issuer`](#discovering-the-service-account-issuer)
below for guidance if you wish to enable issuer validation in Vault.
[k8s-1.21-changelog]: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#api-change-2
[short-lived-tokens]: #how-to-work-with-short-lived-kubernetes-tokens
### How to work with short-lived Kubernetes tokens
There are a few different ways to configure auth for Kubernetes pods when
default mounted pod tokens are short-lived, each with their own tradeoffs. This
table summarizes the options, each of which is explained in more detail below.
| Option | All tokens are short-lived | Can revoke tokens early | Other considerations |
| ------------------------------------ | -------------------------- | ----------------------- | -------------------- |
| Use local token as reviewer JWT | Yes | Yes | Requires Vault (1.9.3+) to be deployed on the Kubernetes cluster |
| Use client JWT as reviewer JWT | Yes | Yes | Operational overhead |
| Use long-lived token as reviewer JWT | No | Yes | |
| Use JWT auth instead | Yes | No | |
-> **Note:** By default, Kubernetes currently extends the lifetime of
admission-injected service account tokens to a year to help smooth the transition to
short-lived tokens. If you would like to disable this, set
[--service-account-extend-token-expiration=false][k8s-extended-tokens] for
`kube-apiserver` or specify your own `serviceAccountToken` volume mount. See
[here](/vault/docs/auth/jwt/oidc-providers/kubernetes#specify-ttl-and-audience) for an example.
[k8s-extended-tokens]: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/#options
#### Use local service account token as the reviewer JWT
When running Vault in a Kubernetes pod the recommended option is to use the pod's local
service account token. Vault will periodically re-read the file to support
short-lived tokens. To use the local token and CA certificate, omit
`token_reviewer_jwt` and `kubernetes_ca_cert` when configuring the auth method.
Vault will attempt to load them from `token` and `ca.crt` respectively inside
the default mount folder `/var/run/secrets/kubernetes.io/serviceaccount/`.
```shell-session
$ vault write auth/kubernetes/config \
kubernetes_host=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
```
!> **Note:** Requires Vault 1.9.3+. In earlier versions, the service account
token and CA certificate are read once and stored in Vault storage.
When the service account token expires or is revoked, Vault will no longer be
able to use the `TokenReview` API and client authentication will fail.
<Tip title="You can use the trust store for CA certificates">
If you leave `kubernetes_ca_cert` unset and set `disable_local_ca_jwt` to
`true`, Vault uses the system's trust store for TLS certificate
verification.
</Tip>
#### Use the Vault client's JWT as the reviewer JWT
When configuring Kubernetes auth, you can omit the `token_reviewer_jwt`, and Vault
will use the Vault client's JWT as its own auth token when communicating with
the Kubernetes `TokenReview` API. If Vault is running in Kubernetes, you also need
to set `disable_local_ca_jwt=true`.
This means Vault does not store any JWTs and allows you to use short-lived tokens
everywhere, but it adds some operational overhead to maintain the cluster role
bindings on the set of service accounts you want to be able to authenticate with
Vault. Each client of Vault would need the `system:auth-delegator` ClusterRole:
```shell-session
$ kubectl create clusterrolebinding vault-client-auth-delegator \
--clusterrole=system:auth-delegator \
--group=group1 \
--serviceaccount=default:svcaccount1 \
...
```
#### Continue using long-lived tokens
You can create a long-lived secret using the instructions [here][k8s-create-secret]
and use that as the `token_reviewer_jwt`. In this example, the `vault` service
account would need the `system:auth-delegator` ClusterRole:
```shell-session
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
name: vault-k8s-auth-secret
annotations:
kubernetes.io/service-account.name: vault
type: kubernetes.io/service-account-token
EOF
```
Using this maintains previous workflows but does not benefit from the improved
security posture of short-lived tokens.
[k8s-create-secret]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token
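Once Kubernetes populates the secret, a sketch for extracting the JWT and
wiring it into the auth method might look like the following (the secret name
and CA file reuse the example above):

```shell-session
$ TOKEN_REVIEWER_JWT=$(kubectl get secret vault-k8s-auth-secret \
    --output='jsonpath={.data.token}' | base64 --decode)

$ vault write auth/kubernetes/config \
    token_reviewer_jwt="$TOKEN_REVIEWER_JWT" \
    kubernetes_host="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT" \
    kubernetes_ca_cert=@ca.crt
```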
#### Use JWT auth
Kubernetes auth is specialized to use Kubernetes' `TokenReview` API. However, the
JWT tokens Kubernetes generates can also be verified using Kubernetes as an OIDC
provider. The JWT auth method documentation has [instructions][k8s-jwt-auth] for
setting up JWT auth with Kubernetes as the OIDC provider.
[k8s-jwt-auth]: /vault/docs/auth/jwt/oidc-providers/kubernetes
This solution allows you to use short-lived tokens for all clients and removes
the need for a reviewer JWT. However, the client tokens cannot be revoked before
their TTL expires, so it is recommended to keep the TTL short with that
limitation in mind.
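As a rough sketch of that alternative (see the linked instructions for the
authoritative steps; the discovery URL, audience, and role values below are
assumptions):

```shell-session
$ vault auth enable jwt

$ vault write auth/jwt/config \
    oidc_discovery_url="https://<your-cluster-issuer>" \
    oidc_discovery_ca_pem=@ca.crt

$ vault write auth/jwt/role/demo \
    role_type=jwt \
    bound_audiences="https://kubernetes.default.svc" \
    user_claim=sub \
    bound_subject="system:serviceaccount:default:myapp" \
    policies=default \
    ttl=1h
```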
### Discovering the service account `issuer`
-> **Note:** From Vault 1.9.0, `disable_iss_validation` and `issuer` are deprecated
and the default for `disable_iss_validation` has changed to `true` for new
Kubernetes auth mounts. The following section only applies if you have set
`disable_iss_validation=false` or created your mount before 1.9 with the default
value, but `disable_iss_validation=true` is the new recommended value for all
versions of Vault.
Kubernetes 1.21+ clusters may require setting the service account
[`issuer`](/vault/api-docs/auth/kubernetes#issuer) to the same value as
`kube-apiserver`'s `--service-account-issuer` flag. This is because the service
account JWTs for these clusters may have an issuer specific to the cluster
itself, instead of the old default of `kubernetes/serviceaccount`. If you are
unable to check this value directly, you can run the following and look for the
`"iss"` field to find the required value:
```shell-session
$ echo '{"apiVersion": "authentication.k8s.io/v1", "kind": "TokenRequest"}' \
| kubectl create -f- --raw /api/v1/namespaces/default/serviceaccounts/default/token \
| jq -r '.status.token' \
| cut -d . -f2 \
| base64 -d
```
Most clusters will also have that information available at the
`.well-known/openid-configuration` endpoint:
```shell-session
$ kubectl get --raw /.well-known/openid-configuration | jq -r .issuer
```
This value is then used when configuring Kubernetes auth, e.g.:
```shell-session
$ vault write auth/kubernetes/config \
kubernetes_host="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT" \
issuer="\"test-aks-cluster-dns-d6cbb78e.hcp.uksouth.azmk8s.io\""
```
## Configuring kubernetes
This auth method accesses the [Kubernetes TokenReview API][k8s-tokenreview] to
validate the provided JWT is still valid. Kubernetes should be running with
`--service-account-lookup`. This defaults to true as of Kubernetes 1.7.
Otherwise deleted tokens in Kubernetes will not be properly revoked and
will be able to authenticate to this auth method.
Service Accounts used in this auth method will need to have access to the
TokenReview API. If Kubernetes is configured to use RBAC roles, the Service
Account should be granted permissions to access this API. The following
example ClusterRoleBinding could be used to grant these permissions:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: role-tokenreview-binding
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: vault-auth
namespace: default
```
## API
The Kubernetes Auth Plugin has a full HTTP API. Please see the
[API docs](/vault/api-docs/auth/kubernetes) for more details.
[k8s-tokenreview]: https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-review-v1/
## Workflows
Refer to the following workflow examples for Kubernetes auth method usage:
### Working with templated policies
Set `use_annotations_as_alias_metadata=true` in your Kubernetes auth
configuration to use Kubernetes Service Account annotations for
[Vault alias](/vault/docs/concepts/identity#entities-and-aliases) metadata.
When `use_annotations_as_alias_metadata` is true, you can use the
`identity.entity.aliases.<mount accessor>.metadata.<metadata key>` template
parameter when you create [templated policies](/vault/docs/concepts/policies#templated-policies).
To use annotations as alias metadata, you must give Vault permission to read
service accounts from the Kubernetes API.
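For example, a minimal sketch of enabling the option on an existing mount
(reusing the host value from earlier; the parameter requires a Vault version
that supports it):

```shell-session
$ vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT" \
    use_annotations_as_alias_metadata=true
```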
#### Scenario introduction
Assume you have the following policy requirement:
Applications can perform read operations on their allocated key/value secret path
(`env-kv/data/<env>`).
#### Annotate Kubernetes Service Accounts with their dedicated secret paths
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: app
namespace: demo
annotations:
vault.hashicorp.com/alias-metadata-env: demo/app
```
When the application `app` logs in with the JWT for its service account, Vault
renders the alias metadata as `env: demo/app`.
#### Create a templated ACL policy
The `env-tmpl` policy lets applications read their secrets defined in KV v2
secret engine. Use the mount accessor value
(`auth_kubernetes_bcecb1e1`) from the [`sys/auth`](/vault/api-docs/system/auth) endpoint or the [`vault auth list`](/vault/docs/commands/auth/list) command.
```shell-session
$ tee env-tmpl.hcl <<EOF
path "env-kv/data/" {
capabilities = [ "read" ]
}
EOF
$ vault policy write env-tmpl env-tmpl.hcl
```
#### Create a Kubernetes role with the templated ACL policy
The Kubernetes role lets users log in as the `env-reader` role to read from the
secret path described in the `env-tmpl` policy.
```shell-session
$ vault write auth/kubernetes/role/env-reader \
bound_service_account_names=app \
bound_service_account_namespaces=demo \
policies=default,env-tmpl \
ttl=1h
```
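A sketch of the resulting login flow, assuming the templated path resolves to
`env-kv/data/demo/app` for this service account (the JWT and token values are
placeholders):

```shell-session
$ vault write auth/kubernetes/login role=env-reader jwt="<service account JWT>"

$ VAULT_TOKEN="<client token from the login response>" \
    vault kv get -mount=env-kv demo/app
```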
## Code example
The following example demonstrates the Kubernetes auth method to authenticate
with Vault.
<CodeTabs>
<CodeBlockConfig>
```go
package main
import (
"fmt"
"os"
vault "github.com/hashicorp/vault/api"
auth "github.com/hashicorp/vault/api/auth/kubernetes"
)
// Fetches a key-value secret (kv-v2) after authenticating to Vault with a Kubernetes service account.
// For a more in-depth setup explanation, please see the relevant readme in the hashicorp/vault-examples repo.
func getSecretWithKubernetesAuth() (string, error) {
// If set, the VAULT_ADDR environment variable will be the address that
// your pod uses to communicate with Vault.
config := vault.DefaultConfig() // modify for more granular configuration
client, err := vault.NewClient(config)
if err != nil {
return "", fmt.Errorf("unable to initialize Vault client: %w", err)
}
// The service-account token will be read from the path where the token's
// Kubernetes Secret is mounted. By default, Kubernetes will mount it to
// /var/run/secrets/kubernetes.io/serviceaccount/token, but an administrator
// may have configured it to be mounted elsewhere.
// In that case, we'll use the option WithServiceAccountTokenPath to look
// for the token there.
k8sAuth, err := auth.NewKubernetesAuth(
"dev-role-k8s",
auth.WithServiceAccountTokenPath("path/to/service-account-token"),
)
if err != nil {
return "", fmt.Errorf("unable to initialize Kubernetes auth method: %w", err)
}
authInfo, err := client.Auth().Login(context.TODO(), k8sAuth)
if err != nil {
return "", fmt.Errorf("unable to log in with Kubernetes auth: %w", err)
}
if authInfo == nil {
return "", fmt.Errorf("no auth info was returned after login")
}
// get secret from Vault, from the default mount path for KV v2 in dev mode, "secret"
secret, err := client.KVv2("secret").Get(context.Background(), "creds")
if err != nil {
return "", fmt.Errorf("unable to read secret: %w", err)
}
// data map can contain more than one key-value pair,
// in this case we're just grabbing one of them
value, ok := secret.Data["password"].(string)
if !ok {
return "", fmt.Errorf("value type assertion failed: %T %#v", secret.Data["password"], secret.Data["password"])
}
return value, nil
}
```
</CodeBlockConfig>
<CodeBlockConfig>
```cs
using System;
using System.IO;
using VaultSharp;
using VaultSharp.V1.AuthMethods;
using VaultSharp.V1.AuthMethods.Kubernetes;
using VaultSharp.V1.Commons;
namespace Examples
{
public class KubernetesAuthExample
{
const string DefaultTokenPath = "path/to/service-account-token";
// Fetches a key-value secret (kv-v2) after authenticating to Vault with a Kubernetes service account.
// For a more in-depth setup explanation, please see the relevant readme in the hashicorp/vault-examples repo.
public string GetSecretWithK8s()
{
var vaultAddr = Environment.GetEnvironmentVariable("VAULT_ADDR");
if(String.IsNullOrEmpty(vaultAddr))
{
throw new System.ArgumentNullException("Vault Address");
}
var roleName = Environment.GetEnvironmentVariable("VAULT_ROLE");
if(String.IsNullOrEmpty(roleName))
{
throw new System.ArgumentNullException("Vault Role Name");
}
// Get the path to service account token or fall back on default path
string pathToToken = String.IsNullOrEmpty(Environment.GetEnvironmentVariable("SA_TOKEN_PATH")) ? DefaultTokenPath : Environment.GetEnvironmentVariable("SA_TOKEN_PATH");
string jwt = File.ReadAllText(pathToToken);
IAuthMethodInfo authMethod = new KubernetesAuthMethodInfo(roleName, jwt);
var vaultClientSettings = new VaultClientSettings(vaultAddr, authMethod);
IVaultClient vaultClient = new VaultClient(vaultClientSettings);
// We can retrieve the secret after creating our VaultClient object
Secret<SecretData> kv2Secret = null;
kv2Secret = vaultClient.V1.Secrets.KeyValue.V2.ReadSecretAsync(path: "/creds").Result;
var password = kv2Secret.Data.Data["password"];
return password.ToString();
}
}
}
```
</CodeBlockConfig>
</CodeTabs>
---
layout: docs
page_title: TLS Certificates - Auth Methods
description: >-
The "cert" auth method allows users to authenticate with Vault using TLS
client certificates.
---
# TLS certificates auth method
@include 'x509-sha1-deprecation.mdx'
The `cert` auth method allows authentication using SSL/TLS client certificates
which are either signed by a CA or self-signed. SSL/TLS client certificates
are defined as having an `ExtKeyUsage` extension with the usage set to either
`ClientAuth` or `Any`.
The trusted certificates and CAs are configured directly to the auth method
using the `certs/` path. This method cannot read trusted certificates from an
external source.
CA certificates are associated with a role; role names and CRL names are normalized to
lower-case.
Please note that to use this auth method, `tls_disable` and `tls_disable_client_certs` must be false in the Vault
configuration, because the client certificates are exchanged as part of the TLS connection itself.
## Revocation checking
Since Vault 0.4, the method supports revocation checking.
An authorized user can submit PEM-formatted CRLs identified by a given name;
these can be updated or deleted at will. They may also set the URL of a
trusted CRL distribution point, and have Vault fetch the CRL as needed.
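For example, registering a named, PEM-formatted CRL is a single write to the
`crls/` path (a minimal sketch; the `web-crl` name and file are illustrative):

```shell-session
$ vault write auth/cert/crls/web-crl [email protected]
```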
When there are CRLs present, at the time of client authentication:
- If the client presents any chain where no certificate in the chain matches a
revoked serial number, authentication is allowed
- If there is no chain presented by the client without a revoked serial number,
authentication is denied
This method provides good security while also allowing for flexibility. For
instance, if an intermediate CA is going to be retired, a client can be
configured with two certificate chains: one that contains the initial
intermediate CA in the path, and the other that contains the replacement. When
the initial intermediate CA is revoked, the chain containing the replacement
will still allow the client to successfully authenticate.
**N.B.**: Matching is performed by _serial number only_. For most CAs,
including Vault's `pki` method, multiple CRLs can successfully be used as
serial numbers are globally unique. However, since RFCs only specify that
serial numbers must be unique per-CA, some CAs issue serial numbers in-order,
which may cause clashes if attempting to use CRLs from two such CAs in the same
mount of the method. The workaround here is to mount multiple copies of the
`cert` method, configure each with one CA/CRL, and have clients connect to the
appropriate mount.
In addition, if a CRL distribution point is not set, the method will not
fetch the CRLs itself, and the CRL's designated time to next update is not
considered. If a CRL is no longer in use, it is up to the administrator to
remove it from the method.
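Removing a CRL that is no longer in use is likewise a single call (reusing the
illustrative `web-crl` name from above):

```shell-session
$ vault delete auth/cert/crls/web-crl
```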
In addition to automatic or manual CRL management, OCSP may be enabled for
a configured certificate, in which case Vault will query the OCSP server either
specified in the presented certificate or configured in the auth method to
check revocation.
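OCSP checking is enabled per trusted certificate. A minimal sketch, assuming a
recent Vault version with OCSP support in the cert method; the responder URL is
illustrative:

```shell-session
$ vault write auth/cert/certs/web \
    [email protected] \
    ocsp_enabled=true \
    ocsp_servers_override="https://ocsp.example.com"
```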
## Authentication
### Via the CLI
The below authenticates against the `web` cert role by presenting a certificate
(`cert.pem`) and key (`key.pem`) signed by the CA associated with the `web` cert
role. Note that the name `web` ties to the configuration example below writing
to a path of `auth/cert/certs/web`. If a certificate role name is not specified,
the auth method will try to authenticate against all trusted certificates.
~> **NOTE** The `-ca-cert` value used here is for the Vault TLS Listener CA
certificate, not the CA that issued the client authentication certificate. This
can be omitted if the CA used to issue the Vault server certificate is trusted
by the local system executing this command.
```shell-session
$ vault login \
-method=cert \
-ca-cert=vault-ca.pem \
-client-cert=cert.pem \
-client-key=key.pem \
name=web
```
### Via the API
The endpoint for the login is `/login`. The client simply connects with their
TLS certificate and when the login endpoint is hit, the auth method will
determine if there is a matching trusted certificate to authenticate the client.
Optionally, you may specify a single certificate role to authenticate against.
~> **NOTE** The `--cacert` value used here is for the Vault TLS Listener CA
certificate, not the CA that issued the client authentication certificate. This
can be omitted if the CA used to issue the Vault server certificate is trusted
by the local system executing this command.
```shell-session
$ curl \
--request POST \
--cacert vault-ca.pem \
--cert cert.pem \
--key key.pem \
--data '{"name": "web"}' \
https://127.0.0.1:8200/v1/auth/cert/login
```
## Configuration
Auth methods must be configured in advance before users or machines can
authenticate. These steps are usually completed by an operator or configuration
management tool.
1. Enable the certificate auth method:
```text
$ vault auth enable cert
```
1. Configure it with trusted certificates that are allowed to authenticate:
```text
$ vault write auth/cert/certs/web \
display_name=web \
policies=web,prod \
[email protected] \
ttl=3600
```
This creates a new trusted certificate "web" with the same display name and the
"web" and "prod" policies. The certificate (public key) used to verify
clients is given by the "web-cert.pem" file. Lastly, an optional `ttl` value
can be provided in seconds to limit the lease duration.
### Load balancing / proxying considerations
If the Vault server is fronted by a reverse proxy or load balancer, TLS is
terminated before the request reaches Vault. In that case, the proxy must
provide the validated client certificate via a header, and that header must be
[configured in the Vault configuration's
listener stanza](/vault/docs/configuration/listener/tcp#tcp-listener-parameters).
Configure the listener with the header name that your load balancer provides.
In this mode, the security of authentication depends on the load balancer performing
full TLS verification of the client, and on the connection between the load
balancer and Vault being secured, ideally with mutual TLS.
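As a sketch of the Vault side of such a setup, assuming a Vault version that
supports the `x_forwarded_for_client_cert_header` listener parameter described
in the linked documentation (the header name and address range are illustrative):

```hcl
listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/etc/vault/tls/server.crt"
  tls_key_file  = "/etc/vault/tls/server.key"
  # Only trust forwarding headers that arrive from the load balancer's range.
  x_forwarded_for_authorized_addrs = "10.0.0.0/8"
  # Header in which the load balancer passes the validated client certificate.
  x_forwarded_for_client_cert_header = "X-Forwarded-Client-Cert"
}
```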
## API
The TLS Certificate auth method has a full HTTP API. Please see the
[TLS Certificate API](/vault/api-docs/auth/cert) for more details.
---
layout: docs
page_title: AliCloud - Auth Methods
description: The AliCloud auth method allows automated authentication of AliCloud entities.
---
# AliCloud auth method
The `alicloud` auth method provides an automated mechanism to retrieve
a Vault token for AliCloud entities. Unlike most Vault auth methods, this
method does not require operators to first manually deploy or provision
security-sensitive credentials (tokens, usernames/passwords, client certificates,
etc.). It treats AliCloud as a Trusted Third Party and uses a
special AliCloud request signed with private credentials. A variety of credentials
can be used to construct the request, but AliCloud offers
[instance metadata](https://www.alibabacloud.com/help/faq-detail/49122.htm)
that's ideally suited for the purpose. By launching an instance with a role,
the role's STS credentials under instance metadata can be used to securely
build the request.
## Authentication workflow
The AliCloud STS API includes a method,
[`sts:GetCallerIdentity`](https://www.alibabacloud.com/help/doc-detail/43767.htm),
which allows you to validate the identity of a client. The client signs
a `GetCallerIdentity` query using the [AliCloud signature
algorithm](https://www.alibabacloud.com/help/doc-detail/67332.htm). It then
submits 2 pieces of information to the Vault server to recreate a valid signed
request: the request URL, and the request headers. The Vault server then
reconstructs the query, forwards it on to the AliCloud STS service, and validates
the result it receives.
Importantly, the credentials used to sign the GetCallerIdentity request can come
from the ECS instance metadata service for an ECS instance, which obviates the
need for an operator to manually provision some sort of identity material first.
However, the credentials can, in principle, come from anywhere, not just from
the locations AliCloud has provided for you.
Each signed AliCloud request includes the current timestamp and a nonce to mitigate
the risk of replay attacks.
It's also important to note that AliCloud does NOT include any sort
of authorization around calls to `GetCallerIdentity`. For example, if you have
a RAM policy on your credential that requires all access to be MFA authenticated,
non-MFA authenticated credentials will still be able to authenticate to Vault
using this method. It does not appear possible to enforce a RAM principal to be
MFA authenticated while authenticating to Vault.
## Authorization workflow
The basic mechanism of operation is per-role.
Roles are associated with a role ARN that has been pre-created in AliCloud.
AliCloud's console displays each role's ARN. A role in Vault has a 1:1 relationship
with a role in AliCloud, and must bear the same name.
When a client assumes that role and sends its `GetCallerIdentity` request to Vault,
Vault matches the arn of its assumed role with that of a pre-created role in Vault.
It then checks what policies have been associated with the role, and grants a
token accordingly.
## Authentication
### Via the CLI
#### Enable AliCloud authentication in Vault.
```shell-session
$ vault auth enable alicloud
```
#### Configure the policies on the role.
```shell-session
$ vault write auth/alicloud/role/dev-role arn='acs:ram::5138828231865461:role/dev-role'
```
#### Perform the login operation
```shell-session
$ vault write auth/alicloud/login \
role=dev-role \
identity_request_url=$IDENTITY_REQUEST_URL_BASE_64 \
identity_request_headers=$IDENTITY_REQUEST_HEADERS_BASE_64
```
For the RAM auth method, generating the signed request is a non-standard
operation. The Vault CLI supports generating this for you:
```shell-session
$ vault login -method=alicloud access_key=... secret_key=... security_token=... region=...
```
This assumes you have the AliCloud credentials you would find on an ECS instance using the
following call:
```
curl 'http://100.100.100.200/latest/meta-data/ram/security-credentials/$ROLE_NAME'
```
Please note the `$ROLE_NAME` above is case-sensitive and must be consistent with how it's reflected
on the instance.
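Putting the two together, a minimal sketch that feeds the instance metadata
credentials into the CLI helper; the `dev-role` name and region are illustrative,
and [jq](https://stedolan.github.io/jq/) is used to parse the metadata response:

```shell-session
$ CREDS=$(curl -s "http://100.100.100.200/latest/meta-data/ram/security-credentials/dev-role")
$ vault login -method=alicloud \
    access_key="$(echo "$CREDS" | jq -r '.AccessKeyId')" \
    secret_key="$(echo "$CREDS" | jq -r '.AccessKeySecret')" \
    security_token="$(echo "$CREDS" | jq -r '.SecurityToken')" \
    region=cn-hangzhou \
    role=dev-role
```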
An example of how to generate the required request values for the `login` method
can be found in the
[Vault CLI source code](https://github.com/hashicorp/vault-plugin-auth-alicloud/blob/master/tools/tools.go).
## API
The AliCloud auth method has a full HTTP API. Please see the
[AliCloud Auth API](/vault/api-docs/auth/alicloud) for more
details.
---
layout: docs
page_title: Cloud Foundry - Auth Methods
description: The cf auth method allows automated authentication of Cloud Foundry instances.
---
# Cloud Foundry (CF) auth method
@include 'x509-sha1-deprecation.mdx'
The `cf` auth method provides an automated mechanism to retrieve a Vault token
for CF instances. It leverages CF's [App and Container Identity Assurance](https://content.pivotal.io/blog/new-in-pcf-2-1-app-container-identity-assurance-via-automatic-cert-rotation).
At a high level, this works as follows:
1. You construct a request to Vault including your `CF_INSTANCE_CERT`, signed by your `CF_INSTANCE_KEY`.
2. Vault validates that the signature is no more than 300 seconds old, or 60 seconds in the future.
3. Vault validates that the cert was issued by the CA certificate you've pre-configured.
4. Vault validates that the request was signed by the private key for the `CF_INSTANCE_CERT`.
5. Vault validates that the `CF_INSTANCE_CERT` application ID, space ID, and org ID presently exist.
6. If all checks pass, Vault issues an appropriately-scoped token.
## Known risks
This authentication engine uses CF's instance identity service to authenticate users to Vault. Because
CF makes its CA certificate and private key available to certain users at any time, it's possible for
someone with access to them to self-issue identity certificates that meet the criteria for a Vault role,
allowing them to gain unintended access to Vault.
For this reason, we recommend that if you enable this auth method, you carefully guard access to the
private key for your instance identity CA certificate. In CredHub, it can be obtained through the
following call: `$ credhub get -n /cf/diego-instance-identity-root-ca`.
Take extra steps to limit access to that path in CredHub, whether it be through use of CredHub's ACL
system, or through carefully limiting the users who can access CredHub.
## Usage
### Preparing to configure the plugin
To configure this plugin, you'll need to gather the CA certificate that CF uses to issue each `CF_INSTANCE_CERT`,
and you'll need to configure it to access the CF API.
In the [cf dev](https://github.com/cloudfoundry-incubator/cfdev) environment, your
instance identity CA certificate can be found using:
```shell-session
$ bosh int --path /diego_instance_identity_ca ~/.cfdev/state/bosh/creds.yml
```
In environments containing Ops Manager, it can be found in CredHub. To gain access to CredHub, first install
[the PCF command-line utility](https://docs.pivotal.io/tiledev/2-2/pcf-command.html) and authenticate to it
using the `metadata` file it describes. These instructions also use [jq](https://stedolan.github.io/jq/) for
ease of drilling into the particular part of the response you'll need.
Once those steps are complete, get the credentials you'll use for CredHub:
```shell-session
$ pcf settings | jq '.products[0].director_credhub_client_credentials'
```
SSH into your Ops Manager VM:
```shell-session
$ ssh -i ops_mgr.pem ubuntu@$OPS_MGR_URL
```
Please note that the above OPS_MGR_URL shouldn't be prepended with `https://`.
Log into CredHub with the credentials you obtained earlier:
```shell-session
$ credhub login --client-name=director_to_credhub --client-secret=some-secret
```
And view the root certificate CF uses to issue instance identity certificates:
```shell-session
$ credhub get -n /cf/diego-instance-identity-root-ca
```
The output to that call will include two certificates and one RSA key. You will need to copy the certificate
under `ca: |` and place it into a file on your local machine that's properly formatted. Here's an example of
a properly formatted CA certificate:
```shell-session
$ cat ca.crt
-----BEGIN CERTIFICATE-----
MIIDNDCCAhygAwIBAgITPqTy1qvfHNEVuxsl9l1glY85OTANBgkqhkiG9w0BAQsF
ADAqMSgwJgYDVQQDEx9EaWVnbyBJbnN0YW5jZSBJZGVudGl0eSBSb290IENBMB4X
DTE5MDYwNjA5MTIwMVoXDTIyMDYwNTA5MTIwMVowKjEoMCYGA1UEAxMfRGllZ28g
SW5zdGFuY2UgSWRlbnRpdHkgUm9vdCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEP
ADCCAQoCggEBALa8xGDYT/q3UzEKAsLDajhuHxPpIPFlCXwp6u8U5Qrf427Xof7n
rXRKzRu3g7E20U/OwzgBi3VZs8T29JGNWeA2k0HtX8oQ+Wc8Qngz9M8t1h9SZlx5
fGfxPt3x7xozaIGJ8p4HKQH1ZlirL7dzun7Y+7m6Ey8cMVsepqUs64r8+KpCbxKJ
rV04qtTNlr0LG3yOxSHlip+DDvUVL3jSFz/JDWxwCymiFBAh0QjG1LKp2FisURoX
GY+HJbf2StpK3i4dYnxQXQlMDpipozK7WFxv3gH4Q6YMZvlmIPidAF8FxfDIsYcq
TgQ5q0pr9mbu8oKbZ74vyZMqiy+r9vLhbu0CAwEAAaNTMFEwHQYDVR0OBBYEFAHf
pwqBhZ8/A6ZAvU+p5JPz/omjMB8GA1UdIwQYMBaAFAHfpwqBhZ8/A6ZAvU+p5JPz
/omjMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBADuDJev+6bOC
v7t9SS4Nd/zeREuF9IKsHDHrYUZBIO1aBQbOO1iDtL4VA3LBEx6fOgN5fbxroUsz
X9/6PtxLe+5U8i5MOztK+OxxPrtDfnblXVb6IW4EKhTnWesS7R2WnOWtzqRQXKFU
voBn3QckLV1o9eqzYIE/aob4z0GaVanA9PSzzbVPsX79RCD1B7NmV0cKEQ7IrCrh
L7ElDV/GlNrtVdHjY0mwz9iu+0YJvxvcHDTERi106b28KXzJz+P5/hyg2wqRXzdI
faXAjW0kuq5nxyJUALwxD/8pz77uNt4w6WfJoSDM6XrAIhh15K3tZg9EzBmAZ/5D
jK0RcmCyaXw=
-----END CERTIFICATE-----
```
An easy way to verify that your CA certificate is properly formatted is using OpenSSL like so:
```shell-session
$ openssl x509 -in ca.crt -text -noout
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
3e:a4:f2:d6:ab:df:1c:d1:15:bb:1b:25:f6:5d:60:95:8f:39:39
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=Diego Instance Identity Root CA
Validity
Not Before: Jun 6 09:12:01 2019 GMT
Not After : Jun 5 09:12:01 2022 GMT
Subject: CN=Diego Instance Identity Root CA
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:b6:bc:c4:60:d8:4f:fa:b7:53:31:0a:02:c2:c3:
6a:38:6e:1f:13:e9:20:f1:65:09:7c:29:ea:ef:14:
e5:0a:df:e3:6e:d7:a1:fe:e7:ad:74:4a:cd:1b:b7:
83:b1:36:d1:4f:ce:c3:38:01:8b:75:59:b3:c4:f6:
f4:91:8d:59:e0:36:93:41:ed:5f:ca:10:f9:67:3c:
42:78:33:f4:cf:2d:d6:1f:52:66:5c:79:7c:67:f1:
3e:dd:f1:ef:1a:33:68:81:89:f2:9e:07:29:01:f5:
66:58:ab:2f:b7:73:ba:7e:d8:fb:b9:ba:13:2f:1c:
31:5b:1e:a6:a5:2c:eb:8a:fc:f8:aa:42:6f:12:89:
ad:5d:38:aa:d4:cd:96:bd:0b:1b:7c:8e:c5:21:e5:
8a:9f:83:0e:f5:15:2f:78:d2:17:3f:c9:0d:6c:70:
0b:29:a2:14:10:21:d1:08:c6:d4:b2:a9:d8:58:ac:
51:1a:17:19:8f:87:25:b7:f6:4a:da:4a:de:2e:1d:
62:7c:50:5d:09:4c:0e:98:a9:a3:32:bb:58:5c:6f:
de:01:f8:43:a6:0c:66:f9:66:20:f8:9d:00:5f:05:
c5:f0:c8:b1:87:2a:4e:04:39:ab:4a:6b:f6:66:ee:
f2:82:9b:67:be:2f:c9:93:2a:8b:2f:ab:f6:f2:e1:
6e:ed
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Key Identifier:
01:DF:A7:0A:81:85:9F:3F:03:A6:40:BD:4F:A9:E4:93:F3:FE:89:A3
X509v3 Authority Key Identifier:
keyid:01:DF:A7:0A:81:85:9F:3F:03:A6:40:BD:4F:A9:E4:93:F3:FE:89:A3
X509v3 Basic Constraints: critical
CA:TRUE
Signature Algorithm: sha256WithRSAEncryption
3b:83:25:eb:fe:e9:b3:82:bf:bb:7d:49:2e:0d:77:fc:de:44:
4b:85:f4:82:ac:1c:31:eb:61:46:41:20:ed:5a:05:06:ce:3b:
58:83:b4:be:15:03:72:c1:13:1e:9f:3a:03:79:7d:bc:6b:a1:
4b:33:5f:df:fa:3e:dc:4b:7b:ee:54:f2:2e:4c:3b:3b:4a:f8:
ec:71:3e:bb:43:7e:76:e5:5d:56:fa:21:6e:04:2a:14:e7:59:
eb:12:ed:1d:96:9c:e5:ad:ce:a4:50:5c:a1:54:be:80:67:dd:
07:24:2d:5d:68:f5:ea:b3:60:81:3f:6a:86:f8:cf:41:9a:55:
a9:c0:f4:f4:b3:cd:b5:4f:b1:7e:fd:44:20:f5:07:b3:66:57:
47:0a:11:0e:c8:ac:2a:e1:2f:b1:25:0d:5f:c6:94:da:ed:55:
d1:e3:63:49:b0:cf:d8:ae:fb:46:09:bf:1b:dc:1c:34:c4:46:
2d:74:e9:bd:bc:29:7c:c9:cf:e3:f9:fe:1c:a0:db:0a:91:5f:
37:48:7d:a5:c0:8d:6d:24:ba:ae:67:c7:22:54:00:bc:31:0f:
ff:29:cf:be:ee:36:de:30:e9:67:c9:a1:20:cc:e9:7a:c0:22:
18:75:e4:ad:ed:66:0f:44:cc:19:80:67:fe:43:8c:ad:11:72:
60:b2:69:7c
```
You will also need to configure access to the CF API. To prepare for this, we will now
use the [cf](https://docs.cloudfoundry.org/cf-cli/install-go-cli.html) command-line tool.
First, while in the directory containing the `metadata` file you used earlier to authenticate
to CF, run `$ pcf target`. This points the `cf` tool at the same place as the `pcf` tool. Next,
run `$ cf api` to view the API endpoint that Vault will use.
Next, configure a user for Vault to use. This plugin was tested with Org Manager level
permissions, but lower level permissions _may_ be usable.
```shell-session
$ cf create-user vault pa55w0rd
$ cf orgs
$ cf org-users my-example-org
$ cf set-org-role vault my-example-org OrgManager
```
Specifically, the `vault` user created here will need to be able to perform the following API calls:
- Method: "GET", endpoint: "/v2/info"
- Method: "POST", endpoint: "/oauth/token"
- Method: "GET", endpoint: "/v2/apps/\$APP_ID"
- Method: "GET", endpoint: "/v2/organizations/\$ORG_ID"
- Method: "GET", endpoint: "/v2/spaces/\$SPACE_ID"
Note that PCF often uses a self-signed certificate for TLS, which can be rejected at first
with an error like:
<CodeBlockConfig hideClipboard>
```plaintext
x509: certificate signed by unknown authority
```
</CodeBlockConfig>
If you encounter this error, you will need to first gain a copy of the certificate that CF
is using for the API via:
```shell-session
$ openssl s_client -showcerts -servername domain.com -connect domain.com:443
```
Here is an example of a real call:
```shell-session
$ openssl s_client -showcerts -servername api.sys.somewhere.cf-app.com -connect api.sys.somewhere.cf-app.com:443
```
Part of the response will contain a certificate, which you'll need to copy and paste to
a well-formatted local file. Please see `ca.crt` above for an example of how the certificate
should look, and how to verify it can be parsed using `openssl`. The walkthrough below presumes
you name this file `cfapi.crt`.
### Walkthrough
After obtaining the information described above, a Vault operator will configure the CF auth method
like so:
```shell-session
$ vault auth enable cf
$ vault write auth/cf/config \
[email protected] \
cf_api_addr=https://api.dev.cfdev.sh \
cf_username=vault \
cf_password=pa55w0rd \
[email protected]
$ vault write auth/cf/roles/my-role \
bound_application_ids=2d3e834a-3a25-4591-974c-fa5626d5d0a1 \
bound_space_ids=3d2eba6b-ef19-44d5-91dd-1975b0db5cc9 \
bound_organization_ids=34a878d0-c2f9-4521-ba73-a9f664e82c7bf \
policies=my-policy
```
Once configured, from a CF instance containing real values for the `CF_INSTANCE_CERT` and
`CF_INSTANCE_KEY`, login can be performed using:
```shell-session
$ vault login -method=cf role=my-role
```
For CF, we also offer an agent that, once configured, can obtain a Vault token on
your behalf.
### Enabling mutual TLS with the CF API
The CF API can be configured to require mutual TLS with clients. This plugin supports mutual TLS by setting the
`cf_api_mutual_tls_certificate` and `cf_api_mutual_tls_key` configuration properties.
<CodeBlockConfig highlight="7-8">
```shell-session
$ vault write auth/cf/config \
[email protected] \
cf_api_addr=https://api.dev.cfdev.sh \
cf_username=vault \
cf_password=pa55w0rd \
[email protected] \
[email protected] \
[email protected]
```
</CodeBlockConfig>
The provided certificate must be signed by a certificate authority trusted by the CF API. Obtaining such a certificate
depends on the specifics of your deployment of Cloud Foundry.
### Maintenance
In testing we found that CF instance identity CA certificates were set to expire in 3 years. Some
CF docs indicate they expire every 4 years. However long they last, at some point you may need
to configure two CA certificates side by side: the one that's soon to expire, and its current or
soon-to-be valid replacement.
```shell-session
$ CURRENT=$(cat /path/to/current-ca.crt)
$ FUTURE=$(cat /path/to/future-ca.crt)
$ vault write auth/vault-plugin-auth-cf/config identity_ca_certificates="$CURRENT" identity_ca_certificates="$FUTURE"
```
If Vault receives a `CF_INSTANCE_CERT` matching _any_ of the `identity_ca_certificates`,
the instance cert will be considered valid.
A similar approach can be taken to update the `cf_api_trusted_certificates`.
### Troubleshooting at a glance
If you receive an error containing `x509: certificate signed by unknown authority`, set
`cf_api_trusted_certificates` as described above.
If you're unable to authenticate using the `CF_INSTANCE_CERT`, first obtain a current copy
of your `CF_INSTANCE_CERT` and copy it to your local environment. Then divide it into two
files, each being a distinct certificate. The first certificate tends to be the actual
`identity.crt`, and the second one tends to be the `intermediate.crt`. Verify each are
properly named and formatted using a command like:
```shell-session
$ openssl x509 -in ca.crt -text -noout
```
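If you need a quick way to divide the combined PEM into the two files, a short
awk one-liner works (a sketch; `combined.crt` is an illustrative name for your
local copy of the `CF_INSTANCE_CERT` contents):

```shell-session
$ awk '/-----BEGIN CERTIFICATE-----/ {n++} {print > ("cert-" n ".crt")}' combined.crt
$ mv cert-1.crt identity.crt && mv cert-2.crt intermediate.crt
```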
Then, verify that the certificates are properly chained to the `ca.crt` you've configured:
```shell-session
$ openssl verify -CAfile ca.crt -untrusted intermediate.crt identity.crt
```
This should show a success response. If it doesn't, try to identify the root cause, be it
an expired certificate, an incorrect `ca.crt`, or a Vault configuration that doesn't
match the certificates you're checking.
## API
The CF auth method has a full HTTP API. Please see the [CF Auth API](/vault/api-docs/auth/cf)
for more details.
---
layout: docs
page_title: Azure - Auth Methods
description: |-
The azure auth method plugin allows automated authentication of Azure Active
Directory.
---
# Azure auth method
The `azure` auth method allows authentication against Vault using
Azure Active Directory credentials. It treats Azure as a Trusted Third Party
and expects a [JSON Web Token (JWT)](https://tools.ietf.org/html/rfc7519)
signed by Azure Active Directory for the configured tenant.
This method supports authentication for system-assigned and user-assigned
managed identities. See [Managed identities for Azure resources](https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview)
for more information about these resources.
This documentation assumes the Azure method is mounted at the `/auth/azure`
path in Vault. Since it is possible to enable auth methods at any location,
please update your API calls accordingly.
## Prerequisites
The Azure auth method requires client credentials to access Azure APIs. The following
are required to configure the auth method:
- A configured [Azure AD application](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-integrating-applications)
which is used as the resource for generating MSI access tokens.
- Client credentials (shared secret) with read access to particular Azure Resource Manager
resources. See [Azure AD Service to Service Client Credentials](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-protocols-oauth-service-to-service).
If Vault is hosted on Azure, Vault can use MSI to access Azure instead of a shared secret.
A managed identity must be [enabled](https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/)
on the resource that acquires the access token.
The following Azure [role assignments](https://learn.microsoft.com/en-us/azure/role-based-access-control/overview#role-assignments)
must be granted to the Azure AD application in order for the auth method to access Azure
APIs during authentication.
### Role assignments
~> **Note:** The role assignments are only required when the
[`vm_name`](/vault/api-docs/auth/azure#vm_name), [`vmss_name`](/vault/api-docs/auth/azure#vmss_name),
or [`resource_id`](/vault/api-docs/auth/azure#resource_id) parameters are used on login.
| Azure Environment | Login Parameter | Azure API Permission |
| ----------- | --------------- | -------------------- |
| Virtual Machine | [`vm_name`](/vault/api-docs/auth/azure#vm_name) | `Microsoft.Compute/virtualMachines/*/read` |
| Virtual Machine Scale Set ([Uniform Orchestration][vmss-uniform]) | [`vmss_name`](/vault/api-docs/auth/azure#vmss_name) | `Microsoft.Compute/virtualMachineScaleSets/*/read` |
| Virtual Machine Scale Set ([Flexible Orchestration][vmss-flex]) | [`vmss_name`](/vault/api-docs/auth/azure#vmss_name) | `Microsoft.Compute/virtualMachineScaleSets/*/read` `Microsoft.ManagedIdentity/userAssignedIdentities/*/read` |
| Services that ([support managed identities][managed-identities]) for Azure resources | [`resource_id`](/vault/api-docs/auth/azure#resource_id) | `read` on the resource used to obtain the JWT |
[vmss-uniform]: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes#scale-sets-with-uniform-orchestration
[vmss-flex]: https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes#scale-sets-with-flexible-orchestration
[managed-identities]: https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/managed-identities-status
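As a rough sketch, one way to satisfy the read permissions above is to assign the
built-in `Reader` role (which includes `*/read`) to the application's service
principal at the relevant scope with the Azure CLI; the identifiers below are
placeholders:

```shell-session
$ az role assignment create \
    --assignee "<APP_CLIENT_ID>" \
    --role "Reader" \
    --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>"
```

A custom role definition containing only the actions listed in the table can
narrow this further.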
### API permissions
The following [API permissions](https://learn.microsoft.com/en-us/azure/active-directory/develop/permissions-consent-overview#types-of-permissions)
must be assigned to the service principal provided to Vault for managing the root rotation in Azure:
| Permission Name | Type |
| ----------------------------- | ----------- |
| Application.ReadWrite.All | Application |
## Authentication
### Via the CLI
The default path is `/auth/azure`. If this auth method was enabled at a different
path, specify `auth/my-path/login` instead.
```shell-session
$ vault write auth/azure/login \
role="dev-role" \
jwt="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..." \
subscription_id="12345-..." \
resource_group_name="test-group" \
vm_name="test-vm"
```
The `role` and `jwt` parameters are required. When using
`bound_service_principal_ids` and `bound_group_ids` in the token roles, all of the
required information is contained in the JWT, and `vm_name`, `vmss_name`, and
`resource_id` are not needed. When using other `bound_*` parameters, calls to
Azure APIs will be made, so `subscription_id`, `resource_group_name`, and
`vm_name`/`vmss_name` are all required; these values can be obtained through
instance metadata.
For example:
```shell-session
$ vault write auth/azure/login role="dev-role" \
jwt="$(curl -s 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F' -H Metadata:true | jq -r '.access_token')" \
subscription_id=$(curl -s -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01" | jq -r '.compute | .subscriptionId') \
resource_group_name=$(curl -s -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01" | jq -r '.compute | .resourceGroupName') \
vm_name=$(curl -s -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01" | jq -r '.compute | .name')
```
### Via the API
The default endpoint is `auth/azure/login`. If this auth method was enabled
at a different path, use that value instead of `azure`.
```shell-session
$ curl \
--request POST \
--data '{"role": "dev-role", "jwt": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."}' \
https://127.0.0.1:8200/v1/auth/azure/login
```
The response will contain the token at `auth.client_token`:
```json
{
"auth": {
"client_token": "f33f8c72-924e-11f8-cb43-ac59d697597c",
"accessor": "0e9e354a-520f-df04-6867-ee81cae3d42d",
"policies": ["default", "dev", "prod"],
"lease_duration": 2764800,
"renewable": true
}
}
```
## Configuration
Auth methods must be configured in advance before machines can authenticate.
These steps are usually completed by an operator or configuration management
tool.
### Via the CLI
1. Enable Azure authentication in Vault:
```shell-session
$ vault auth enable azure
```
1. Configure the Azure auth method:
```shell-session
$ vault write auth/azure/config \
tenant_id=7cd1f227-ca67-4fc6-a1a4-9888ea7f388c \
resource=https://management.azure.com/ \
client_id=dd794de4-4c6c-40b3-a930-d84cd32e9699 \
client_secret=IT3B2XfZvWnfB98s1cie8EMe7zWg483Xy8zY004=
```
For the complete list of configuration options, please see the API
documentation.
In some cases, you cannot set sensitive account credentials in your
Vault configuration. For example, your organization may require that all
security credentials are short-lived or explicitly tied to a machine identity.
To provide managed identity security credentials to Vault, we recommend using Vault
[plugin workload identity federation](#plugin-workload-identity-federation-wif)
(WIF) as shown below.
1. Alternatively, configure the audience claim value and the client and tenant IDs for plugin workload identity federation:

```shell-session
$ vault write auth/azure/config \
    tenant_id=7cd1f227-ca67-4fc6-a1a4-9888ea7f388c \
    client_id=dd794de4-4c6c-40b3-a930-d84cd32e9699 \
    identity_token_audience=vault.example/v1/identity/oidc/plugins
```
The Vault identity token provider signs the plugin identity token JWT internally.
If a trust relationship exists between Vault and Azure through WIF, the auth
method can exchange the Vault identity token for a federated access token.
To configure a trusted relationship between Vault and Azure:
- You must configure the [identity token issuer backend](/vault/api-docs/secret/identity/tokens#configure-the-identity-tokens-backend)
for Vault.
- Azure must have a
[federated identity credential](https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation-create-trust?pivots=identity-wif-apps-methods-azp#configure-a-federated-identity-credential-on-an-app)
configured with information about the fully qualified and network-reachable
issuer URL for the Vault plugin
[identity token provider](/vault/api-docs/secret/identity/tokens#read-plugin-identity-well-known-configurations).
Establishing a trusted relationship between Vault and Azure ensures that Azure
can fetch JWKS
[public keys](/vault/api-docs/secret/identity/tokens#read-active-public-keys)
and verify the plugin identity token signature.
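For example, a minimal sketch of configuring the issuer on Vault's identity
tokens backend, assuming Azure can reach Vault at `https://vault.example.com:8200`:

```shell-session
$ vault write identity/oidc/config \
    issuer="https://vault.example.com:8200"
```

Azure then discovers the signing keys through the plugin issuer URL,
`https://vault.example.com:8200/v1/identity/oidc/plugins`.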
1. Create a role:
```shell-session
$ vault write auth/azure/role/dev-role \
policies="prod,dev" \
bound_subscription_ids=6a1d5988-5917-4221-b224-904cd7e24a25 \
bound_resource_groups=vault
```
Roles are associated with an authentication type/entity and a set of Vault
policies. Roles are configured with constraints specific to the
authentication type, as well as overall constraints and configuration for
the generated auth tokens.
For the complete list of role options, please see the [API documentation](/vault/api-docs/auth/azure).
### Via the API
1. Enable Azure authentication in Vault:
```shell-session
$ curl \
--header "X-Vault-Token: ..." \
--request POST \
--data '{"type": "azure"}' \
https://127.0.0.1:8200/v1/sys/auth/azure
```
1. Configure the Azure auth method:
```shell-session
$ curl \
--header "X-Vault-Token: ..." \
--request POST \
--data '{"tenant_id": "...", "resource": "..."}' \
https://127.0.0.1:8200/v1/auth/azure/config
```
1. Create a role:
```shell-session
$ curl \
--header "X-Vault-Token: ..." \
--request POST \
--data '{"policies": ["dev", "prod"], ...}' \
https://127.0.0.1:8200/v1/auth/azure/role/dev-role
```
## Azure managed identities
There are two types of [managed identities](https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types)
in Azure: System-assigned and User-assigned. System-assigned identities are unique to
every virtual machine in Azure. If the resources using Azure auth are recreated
frequently, using system-assigned identities could result in many Vault entities being
created. For environments with high ephemeral workloads, user-assigned identities are
recommended.
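For example, a sketch of a role that only accepts logins from a particular
user-assigned identity by binding on its service principal (client) ID; the ID
and role name below are placeholders:

```shell-session
$ vault write auth/azure/role/ephemeral-workloads \
    policies="dev" \
    bound_service_principal_ids="b7e5a2c1-1111-2222-3333-444455556666"
```

Workloads sharing one user-assigned identity produce far fewer Vault entities
than per-VM system-assigned identities.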
### Limitations
The TTL of the access token returned by Azure AD for a managed identity is
24 hours and is not configurable. See [limitations of using managed identities][id-limitations]
for more info.
[id-limitations]: https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations#limitation-of-using-managed-identities-for-authorization
## Azure debug logs
The Azure auth plugin supports debug logging which includes additional information
about requests and responses from the Azure API.
To enable the Azure debug logs, set the following environment variable on the Vault
server:
```shell
AZURE_SDK_GO_LOGGING=all
```
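For example, assuming a simple shell-managed test server (the config path is a
placeholder):

```shell-session
$ export AZURE_SDK_GO_LOGGING=all
$ vault server -config=/etc/vault.d/vault.hcl
```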
## Plugin Workload Identity Federation (WIF)
<EnterpriseAlert product="vault" />
The Azure auth method supports the plugin WIF workflow, and has a source of identity called
a plugin identity token. A plugin identity token is a JWT that is signed internally by Vault's
[plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration).
If there is a trust relationship configured between Vault and Azure through
[workload identity federation](https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation),
the auth method can exchange its identity token for short-lived access tokens needed to
perform its actions.
Exchanging identity tokens for access tokens lets the Azure auth method
operate without configuring explicit access to sensitive client credentials.
To configure the auth method to use plugin WIF:
1. Ensure that Vault [openid-configuration](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-openid-configuration)
and [public JWKS](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-public-jwks)
APIs are network-reachable by Azure. We recommend using an API proxy or gateway
if you need to limit Vault API exposure.
1. Configure a
[federated identity credential](https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation-create-trust?pivots=identity-wif-apps-methods-azp#configure-a-federated-identity-credential-on-an-app)
on a dedicated application registration in Azure to establish a trust relationship with Vault.
1. The issuer URL **must** point at your [Vault plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration) with the
`/.well-known/openid-configuration` suffix removed. For example:
`https://host:port/v1/identity/oidc/plugins`.
1. The subject identifier **must** match the unique `sub` claim issued by plugin identity tokens.
The subject identifier should have the form `plugin-identity:<NAMESPACE>:auth:<AZURE_MOUNT_ACCESSOR>`.
1. The audience should be under 600 characters. The default value in Azure is `api://AzureADTokenExchange`.
1. Configure the Azure auth method with the client and tenant IDs and the OIDC audience value.
```shell-session
$ vault write auth/azure/config \
tenant_id=7cd1f227-ca67-4fc6-a1a4-9888ea7f388c \
client_id=dd794de4-4c6c-40b3-a930-d84cd32e9699 \
identity_token_audience=vault.example/v1/identity/oidc/plugins
```
Your auth method can now use plugin WIF for its configuration credentials.
By default, WIF [credentials](https://learn.microsoft.com/en-us/entra/identity-platform/access-tokens#token-lifetime)
have a time-to-live of 1 hour and automatically refresh when they expire.
Please see the [API documentation](/vault/api-docs/auth/azure#configure)
for more details on the fields associated with plugin WIF.
## API
The Azure Auth Plugin has a full HTTP API. Please see the [API documentation](/vault/api-docs/auth/azure) for more details.
## Code example
The following example demonstrates using the Azure auth method to authenticate
with Vault.
<CodeTabs>
<CodeBlockConfig>
```go
package main
import (
"context"
"fmt"
vault "github.com/hashicorp/vault/api"
auth "github.com/hashicorp/vault/api/auth/azure"
)
// Fetches a key-value secret (kv-v2) after authenticating to Vault via Azure authentication.
// This example assumes you have a configured Azure AD Application.
func getSecretWithAzureAuth() (string, error) {
config := vault.DefaultConfig() // modify for more granular configuration
client, err := vault.NewClient(config)
if err != nil {
return "", fmt.Errorf("unable to initialize Vault client: %w", err)
}
azureAuth, err := auth.NewAzureAuth(
"dev-role-azure",
)
if err != nil {
return "", fmt.Errorf("unable to initialize Azure auth method: %w", err)
}
authInfo, err := client.Auth().Login(context.Background(), azureAuth)
if err != nil {
return "", fmt.Errorf("unable to login to Azure auth method: %w", err)
}
if authInfo == nil {
return "", fmt.Errorf("no auth info was returned after login")
}
// get secret from the default mount path for KV v2 in dev mode, "secret"
secret, err := client.KVv2("secret").Get(context.Background(), "creds")
if err != nil {
return "", fmt.Errorf("unable to read secret: %w", err)
}
// data map can contain more than one key-value pair,
// in this case we're just grabbing one of them
value, ok := secret.Data["password"].(string)
if !ok {
return "", fmt.Errorf("value type assertion failed: %T %#v", secret.Data["password"], secret.Data["password"])
}
return value, nil
}
```
</CodeBlockConfig>
<CodeBlockConfig>
```cs
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Text;
using Newtonsoft.Json;
using VaultSharp;
using VaultSharp.V1.AuthMethods;
using VaultSharp.V1.AuthMethods.Azure;
using VaultSharp.V1.Commons;
namespace Examples
{
public class AzureAuthExample
{
public class InstanceMetadata
{
public string name { get; set; }
public string resourceGroupName { get; set; }
public string subscriptionId { get; set; }
}
const string MetadataEndPoint = "http://169.254.169.254/metadata/instance?api-version=2017-08-01";
const string AccessTokenEndPoint = "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/";
/// <summary>
/// Fetches a key-value secret (kv-v2) after authenticating to Vault via Azure authentication.
/// This example assumes you have a configured Azure AD Application.
/// </summary>
public string GetSecretWithAzureAuth()
{
string vaultAddr = Environment.GetEnvironmentVariable("VAULT_ADDR");
if(String.IsNullOrEmpty(vaultAddr))
{
throw new System.ArgumentNullException("Vault Address");
}
string roleName = Environment.GetEnvironmentVariable("VAULT_ROLE");
if(String.IsNullOrEmpty(roleName))
{
throw new System.ArgumentNullException("Vault Role Name");
}
string jwt = GetJWT();
InstanceMetadata metadata = GetMetadata();
IAuthMethodInfo authMethod = new AzureAuthMethodInfo(roleName: roleName, jwt: jwt, subscriptionId: metadata.subscriptionId, resourceGroupName: metadata.resourceGroupName, virtualMachineName: metadata.name);
var vaultClientSettings = new VaultClientSettings(vaultAddr, authMethod);
IVaultClient vaultClient = new VaultClient(vaultClientSettings);
// We can retrieve the secret from the VaultClient object
Secret<SecretData> kv2Secret = null;
kv2Secret = vaultClient.V1.Secrets.KeyValue.V2.ReadSecretAsync(path: "/creds").Result;
var password = kv2Secret.Data.Data["password"];
return password.ToString();
}
/// <summary>
/// Query the Azure Instance Metadata Service (IMDS) for metadata about the Azure instance
/// </summary>
private InstanceMetadata GetMetadata()
{
HttpWebRequest metadataRequest = (HttpWebRequest)WebRequest.Create(MetadataEndPoint);
metadataRequest.Headers["Metadata"] = "true";
metadataRequest.Method = "GET";
HttpWebResponse metadataResponse = (HttpWebResponse)metadataRequest.GetResponse();
StreamReader streamResponse = new StreamReader(metadataResponse.GetResponseStream());
string stringResponse = streamResponse.ReadToEnd();
var resultsDict = JsonConvert.DeserializeObject<Dictionary<string, InstanceMetadata>>(stringResponse);
return resultsDict["compute"];
}
/// <summary>
/// Query the Azure Instance Metadata Service (IMDS) for an Azure Resource Manager (ARM) access token
/// </summary>
private string GetJWT()
{
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(AccessTokenEndPoint);
request.Headers["Metadata"] = "true";
request.Method = "GET";
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
// Pipe response Stream to a StreamReader and extract access token
StreamReader streamResponse = new StreamReader(response.GetResponseStream());
string stringResponse = streamResponse.ReadToEnd();
var resultsDict = JsonConvert.DeserializeObject<Dictionary<string, string>>(stringResponse);
return resultsDict["access_token"];
}
}
}
```
</CodeBlockConfig>
</CodeTabs>
---
layout: docs
page_title: Okta - Auth Methods
description: |-
The Okta auth method allows users to authenticate with Vault using Okta
credentials.
---
# Okta auth method
The `okta` auth method allows authentication using Okta and user/password
credentials. This allows Vault to be integrated into environments using Okta.
The mapping of groups in Okta to Vault policies is managed by using the
[users](/vault/api-docs/auth/okta#register-user) and [groups](/vault/api-docs/auth/okta#register-group)
APIs.
## Authentication
### Via the CLI
The default path is `/okta`. If this auth method was enabled at a different
path, specify `-path=/my-path` in the CLI.
```shell-session
$ vault login -method=okta username=my-username
```
### Via the API
The default endpoint is `auth/okta/login`. If this auth method was enabled
at a different path, use that value instead of `okta`.
```shell-session
$ curl \
--request POST \
--data '{"password": "MY_PASSWORD"}' \
http://127.0.0.1:8200/v1/auth/okta/login/my-username
```
The response will contain a token at `auth.client_token`:
```json
{
"auth": {
"client_token": "abcd1234-7890...",
"policies": ["admins"],
"metadata": {
"username": "mitchellh"
}
}
}
```
### MFA
Okta Verify Push, Okta TOTP, and Google TOTP MFA methods are supported during login. For TOTP, the current
passcode may be provided via the `totp` parameter:
```shell-session
$ vault login -method=okta username=my-username totp=123456
```
If both Okta TOTP and Google TOTP are enabled in your Okta account, make sure to pass in
the name of the `provider` to which the `totp` code belongs.
```shell-session
$ vault login -method=okta username=my-username totp=123456 provider=GOOGLE
```
If `totp` is not set and MFA Push is configured in Okta, a Push will be sent during login.
The auth method uses the Okta [Authentication API](https://developer.okta.com/docs/reference/api/authn/).
It does not manage Okta [sessions](https://developer.okta.com/docs/reference/api/sessions/) for authenticated
users. This means that if MFA Push is configured, it will be required during both login and token renewal.
Note that this MFA support is integrated with Okta Auth and is limited strictly to login
operations. It is not related to [Enterprise MFA](/vault/docs/enterprise/mfa).
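The `totp` and `provider` values can also be supplied in the login payload when
using the API; the values below are placeholders:

```shell-session
$ curl \
    --request POST \
    --data '{"password": "MY_PASSWORD", "totp": "123456", "provider": "GOOGLE"}' \
    http://127.0.0.1:8200/v1/auth/okta/login/my-username
```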
## Configuration
Auth methods must be configured in advance before users or machines can
authenticate. These steps are usually completed by an operator or configuration
management tool.
### Via the CLI
1. Enable the Okta auth method:
```shell-session
$ vault auth enable okta
```
1. Configure Vault to communicate with your Okta account:
```shell-session
$ vault write auth/okta/config \
base_url="okta.com" \
org_name="dev-123456" \
api_token="00abcxyz..."
```
-> **Note**: Support for okta auth with no API token is deprecated in Vault 1.4.
If no token is supplied, Vault will function, but only locally configured
group membership will be available. Without a token, groups will not be
queried.
For the complete list of configuration options, please see the
[API documentation](/vault/api-docs/auth/okta).
1. Map an Okta group to a Vault policy:
```shell-session
$ vault write auth/okta/groups/scientists policies=nuclear-reactor
```
In this example, anyone who successfully authenticates via Okta who is a
member of the "scientists" group will receive a Vault token with the
"nuclear-reactor" policy attached.
1. It is also possible to add users directly:
```shell-session
$ vault write auth/okta/groups/engineers policies=autopilot
$ vault write auth/okta/users/tesla groups=engineers
```
This adds the Okta user "tesla" to the "engineers" group, which maps to
the "autopilot" Vault policy.
-> **Note**: The user-policy mapping via group membership happens at token _creation
time_. Any changes in group membership in Okta will not affect existing
tokens that have already been provisioned. To see these changes, users
will need to re-authenticate. You can force this by revoking the
existing tokens.
### Okta API token permissions
The `okta` auth method uses the [Authentication](https://developer.okta.com/docs/reference/api/authn/)
and [User Groups](https://developer.okta.com/docs/reference/api/users/#get-user-s-groups)
APIs to authenticate users and obtain their group membership. The [`api_token`](/vault/api-docs/auth/okta#api_token)
provided to the auth method's configuration must have sufficient privileges to exercise
these Okta APIs.
It is recommended to configure the auth method with a minimally permissive API token.
To do so, create the API token using an administrator with the standard
[Read-only Admin](https://help.okta.com/en/prod/Content/Topics/Security/administrators-read-only-admin.htm)
role. Custom roles may also be used to grant minimal permissions to the Okta API token.
## API
The Okta auth method has a full HTTP API. Please see the
[Okta Auth API](/vault/api-docs/auth/okta) for more details.
---
layout: docs
page_title: LDAP - Auth Methods
description: |-
The "ldap" auth method allows users to authenticate with Vault using LDAP
credentials.
---
# LDAP auth method
@include 'x509-sha1-deprecation.mdx'
The `ldap` auth method allows authentication using an existing LDAP
server and user/password credentials. This allows Vault to be integrated
into environments using LDAP without duplicating the user/pass configuration
in multiple places.
The mapping of groups and users in LDAP to Vault policies is managed by using
the `users/` and `groups/` paths.
## A note on escaping
**It is up to the administrator** to provide properly escaped DNs. This
includes the user DN, bind DN for search, and so on.
The only DN escaping performed by this method is on usernames given at login
time when they are inserted into the final bind DN, and uses escaping rules
defined in RFC 4514.
Additionally, Active Directory has escaping rules that differ slightly from the
RFC; in particular it requires escaping of '#' regardless of position in the DN
(the RFC only requires it to be escaped when it is the first character), and
'=', which the RFC indicates can be escaped with a backslash, but does not
contain in its set of required escapes. If you are using Active Directory and
these appear in your usernames, please ensure that they are escaped, in
addition to being properly escaped in your configured DNs.
For reference, see [RFC 4514](https://www.ietf.org/rfc/rfc4514.txt) and this
[TechNet post on characters to escape in Active
Directory](http://social.technet.microsoft.com/wiki/contents/articles/5312.active-directory-characters-to-escape.aspx).
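For example, per RFC 4514 a comma inside an attribute value must be escaped with
a backslash, so a user whose common name is `Smith, John` appears in a
configured DN as:

```text
cn=Smith\, John,ou=Users,dc=example,dc=com
```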
## Authentication
### Via the CLI
```shell-session
$ vault login -method=ldap username=mitchellh
Password (will be hidden):
Successfully authenticated! The policies that are associated
with this token are listed below:
admins
```
### Via the API
```shell-session
$ curl \
--request POST \
--data '{"password": "foo"}' \
http://127.0.0.1:8200/v1/auth/ldap/login/mitchellh
```
The response will be in JSON. For example:
```javascript
{
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": null,
"auth": {
"client_token": "c4f280f6-fdb2-18eb-89d3-589e2e834cdb",
"policies": [
"admins"
],
"metadata": {
"username": "mitchellh"
},
"lease_duration": 0,
"renewable": false
}
}
```
## Configuration
Auth methods must be configured in advance before users or machines can
authenticate. These steps are usually completed by an operator or configuration
management tool.
1. Enable the ldap auth method:
```text
$ vault auth enable ldap
```
1. Configure connection details for your LDAP server, information on how to
authenticate users, and instructions on how to query for group membership. The
configuration options are categorized and detailed below.
### Connection parameters
- `url` (string, required) - The LDAP server to connect to. Examples: `ldap://ldap.myorg.com`, `ldaps://ldap.myorg.com:636`. This can also be a comma-delimited list of URLs, e.g. `ldap://ldap.myorg.com,ldaps://ldap.myorg.com:636`, in which case the servers will be tried in order if there are errors during the connection process.
- `starttls` (bool, optional) - If true, issues a `StartTLS` command after establishing an unencrypted connection.
- `insecure_tls` - (bool, optional) - If true, skips LDAP server SSL certificate verification - insecure, use with caution!
- `certificate` - (string, optional) - CA certificate to use when verifying LDAP server certificate, must be x509 PEM encoded.
- `client_tls_cert` - (string, optional) - Client certificate to provide to the LDAP server, must be x509 PEM encoded.
- `client_tls_key` - (string, optional) - Client certificate key to provide to the LDAP server, must be x509 PEM encoded.
### Binding parameters
There are two alternate methods of resolving the user object used to authenticate the end user: _Search_ or _User Principal Name_. When using _Search_, the bind can be either anonymous or authenticated. User Principal Name is a method of specifying users supported by Active Directory. More information on UPN can be found [here](<https://msdn.microsoft.com/en-us/library/ms677605(v=vs.85).aspx#userPrincipalName>).
`userfilter` works with both authenticated and anonymous _Search_.
In order for `userfilter` to apply for authenticated searches, `binddn` and `bindpass` must be set.
For anonymous search, `discoverdn` must be set to `true`, and `deny_null_bind` must be set to false.
#### Binding - authenticated search
- `binddn` (string, optional) - Distinguished name of object to bind when performing user and group search. Example: `cn=vault,ou=Users,dc=example,dc=com`
- `bindpass` (string, optional) - Password to use along with `binddn` when performing user search.
- `userdn` (string, optional) - Base DN under which to perform user search. Example: `ou=Users,dc=example,dc=com`
- `userattr` (string, optional) - Attribute on user attribute object matching the username passed when authenticating. Examples: `sAMAccountName`, `cn`, `uid`
- `userfilter` (string, optional) - Go template used to construct an LDAP user search filter. The template can access the following context variables: \[`UserAttr`, `Username`\]. The default userfilter is `({{.UserAttr}}={{.Username}})` or `(userPrincipalName={{.Username}}@UPNDomain)` if the `upndomain` parameter is set. The user search filter can be used to restrict which users can attempt to log in. For example, to limit login to users that are not contractors, you could write `(&(objectClass=user)({{.UserAttr}}={{.Username}})(!(employeeType=Contractor)))`.
@include 'ldap-auth-userfilter-warning.mdx'
#### Binding - anonymous search
- `discoverdn` (bool, optional) - If true, use anonymous bind to discover the bind DN of a user
- `userdn` (string, optional) - Base DN under which to perform user search. Example: `ou=Users,dc=example,dc=com`
- `userattr` (string, optional) - Attribute on user attribute object matching the username passed when authenticating. Examples: `sAMAccountName`, `cn`, `uid`
- `userfilter` (string, optional) - Go template used to construct an LDAP user search filter. The template can access the following context variables: \[`UserAttr`, `Username`\]. The default userfilter is `({{.UserAttr}}={{.Username}})` or `(userPrincipalName={{.Username}}@UPNDomain)` if the `upndomain` parameter is set. The user search filter can be used to restrict which users can attempt to log in. For example, to limit login to users that are not contractors, you could write `(&(objectClass=user)({{.UserAttr}}={{.Username}})(!(employeeType=Contractor)))`.
- `deny_null_bind` (bool, optional) - This option prevents users from bypassing authentication when providing an empty password. The default is `true`.
- `anonymous_group_search` (bool, optional) - Use anonymous binds when performing LDAP group searches. Defaults to `false`.
@include 'ldap-auth-userfilter-warning.mdx'
#### Alias dereferencing
- `dereference_aliases` (string, optional) - Control how aliases are dereferenced when performing the search. Possible values are: `never`, `finding`, `searching`, and `always`. `finding` will only dereference aliases during name resolution of the base. `searching` will dereference aliases after name resolution.
#### Binding - user principal name (AD)
- `upndomain` (string, optional) - userPrincipalDomain used to construct the UPN string for the authenticating user. The constructed UPN will appear as `[username]@UPNDomain`. Example: `example.com`, which will cause vault to bind as `username@example.com`.
### Group membership resolution
Once a user has been authenticated, the LDAP auth method must know how to resolve which groups the user is a member of. The configuration for this can vary depending on your LDAP server and your directory schema. There are two main strategies when resolving group membership - the first is searching for the authenticated user object and following an attribute to the groups it is a member of. The second is searching for group objects of which the authenticated user is a member. Both methods are supported.
- `groupfilter` (string, optional) - Go template used when constructing the group membership query. The template can access the following context variables: \[`UserDN`, `Username`\]. The default is `(|(memberUid={{.Username}})(member={{.UserDN}})(uniqueMember={{.UserDN}}))`, which is compatible with several common directory schemas. To support nested group resolution for Active Directory, instead use the following query: `(&(objectClass=group)(member:1.2.840.113556.1.4.1941:={{.UserDN}}))`.
- `groupdn` (string, required) - LDAP search base to use for group membership search. This can be the root containing either groups or users. Example: `ou=Groups,dc=example,dc=com`
- `groupattr` (string, optional) - LDAP attribute to follow on objects returned by `groupfilter` in order to enumerate user group membership. Examples: for groupfilter queries returning _group_ objects, use: `cn`. For queries returning _user_ objects, use: `memberOf`. The default is `cn`.
_Note_: When using _Authenticated Search_ for binding parameters (see above) the distinguished name defined for `binddn` is used for the group search. Otherwise, the authenticating user is used to perform the group search.
Use `vault path-help` for more details.
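For example, to print the parameter help for the config endpoint of this mount:

```shell-session
$ vault path-help auth/ldap/config
```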
### Other
- `username_as_alias` (bool, optional) - If set to true, forces the auth method to use the username passed by the user as the alias name.
- `max_page_size` (int, optional) - If set to a value greater than 0, the LDAP backend will use the LDAP server's paged search control to request pages of up to the given size. This can be used to avoid hitting the LDAP server's maximum result size limit. Otherwise, the LDAP backend will not use the paged search control.
## Examples
### Scenario 1
- LDAP server running on `ldap.example.com`, port 389.
- Server supports `STARTTLS` command to initiate encryption on the standard port.
- CA Certificate stored in file named `ldap_ca_cert.pem`
- Server is Active Directory supporting the userPrincipalName attribute. Users are identified as `[email protected]`.
- Groups are nested, so we will use `LDAP_MATCHING_RULE_IN_CHAIN` to walk the ancestry graph.
- Group search will start under `ou=Groups,dc=example,dc=com`. For all group objects under that path, the `member` attribute will be checked for a match against the authenticated user.
- Group names are identified using their `cn` attribute.
```shell-session
$ vault write auth/ldap/config \
url="ldap://ldap.example.com" \
userdn="ou=Users,dc=example,dc=com" \
groupdn="ou=Groups,dc=example,dc=com" \
groupfilter="(&(objectClass=group)(member:1.2.840.113556.1.4.1941:=))" \
groupattr="cn" \
upndomain="example.com" \
certificate=@ldap_ca_cert.pem \
insecure_tls=false \
starttls=true
...
```
### Scenario 2
- LDAP server running on `ldap.example.com`, port 389.
- Server supports `STARTTLS` command to initiate encryption on the standard port.
- CA Certificate stored in file named `ldap_ca_cert.pem`
- Server does not allow anonymous binds for performing user search.
- Bind account used for searching is `cn=vault,ou=users,dc=example,dc=com` with password `My$ecrt3tP4ss`.
- User objects are under the `ou=Users,dc=example,dc=com` organizational unit.
- Username passed to vault when authenticating maps to the `sAMAccountName` attribute.
- Group membership will be resolved via the `memberOf` attribute of _user_ objects. That search will begin under `ou=Users,dc=example,dc=com`.
```shell-session
$ vault write auth/ldap/config \
url="ldap://ldap.example.com" \
userattr=sAMAccountName \
userdn="ou=Users,dc=example,dc=com" \
groupdn="ou=Users,dc=example,dc=com" \
groupfilter="(&(objectClass=person)(uid=))" \
groupattr="memberOf" \
binddn="cn=vault,ou=users,dc=example,dc=com" \
bindpass='My$ecrt3tP4ss' \
certificate=@ldap_ca_cert.pem \
insecure_tls=false \
starttls=true
...
```
### Scenario 3
- LDAP server running on `ldap.example.com`, port 636 (LDAPS)
- CA Certificate stored in file named `ldap_ca_cert.pem`
- User objects are under the `ou=Users,dc=example,dc=com` organizational unit.
- Username passed to vault when authenticating maps to the `uid` attribute.
- User bind DN will be auto-discovered using anonymous binding.
- Group membership will be resolved via any one of `memberUid`, `member`, or `uniqueMember` attributes. That search will begin under `ou=Groups,dc=example,dc=com`.
- Group names are identified using the `cn` attribute.
```shell-session
$ vault write auth/ldap/config \
url="ldaps://ldap.example.com" \
userattr="uid" \
userdn="ou=Users,dc=example,dc=com" \
discoverdn=true \
groupdn="ou=Groups,dc=example,dc=com" \
certificate=@ldap_ca_cert.pem \
insecure_tls=false \
starttls=true
...
```
## LDAP group -> policy mapping
Next we want to create a mapping from an LDAP group to a Vault policy:
```shell-session
$ vault write auth/ldap/groups/scientists policies=foo,bar
```
This maps the LDAP group "scientists" to the "foo" and "bar" Vault policies.
We can also add specific LDAP users to additional (potentially non-LDAP)
groups. Note that policies can also be specified directly on LDAP users.
```shell-session
$ vault write auth/ldap/groups/engineers policies=foobar
$ vault write auth/ldap/users/tesla groups=engineers policies=zoobar
```
This adds the LDAP user "tesla" to the "engineers" group, which maps to
the "foobar" Vault policy. User "tesla" itself is associated with "zoobar"
policy.
Finally, we can test this by authenticating:
```shell-session
$ vault login -method=ldap username=tesla
Password (will be hidden):
Successfully authenticated! The policies that are associated
with this token are listed below:
default, foobar, zoobar
```
## Note on policy mapping
Note that user -> policy mapping happens at token creation time, and changes in group membership on the LDAP server will not affect tokens that have already been provisioned. To pick up these changes, old tokens should be revoked and the user should be asked to reauthenticate.
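For example, a sketch of forcing re-authentication for every token issued
through this mount by revoking on the mount's path prefix:

```shell-session
$ vault token revoke -mode=path auth/ldap
```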
## User lockout
@include 'user-lockout.mdx'
## API
The LDAP auth method has a full HTTP API. Please see the
[LDAP auth method API](/vault/api-docs/auth/ldap) for more
details.
object and following an attribute to groups it is a member of The second is to search for group objects of which the authenticated user is a member of Both methods are supported groupfilter string optional Go template used when constructing the group membership query The template can access the following context variables UserDN Username The default is memberUid member uniqueMember which is compatible with several common directory schemas To support nested group resolution for Active Directory instead use the following query objectClass group member 1 2 840 113556 1 4 1941 groupdn string required LDAP search base to use for group membership search This can be the root containing either groups or users Example ou Groups dc example dc com groupattr string optional LDAP attribute to follow on objects returned by groupfilter in order to enumerate user group membership Examples for groupfilter queries returning group objects use cn For queries returning user objects use memberOf The default is cn Note When using Authenticated Search for binding parameters see above the distinguished name defined for binddn is used for the group search Otherwise the authenticating user is used to perform the group search Use vault path help for more details Other username as alias bool optional If set to true forces the auth method to use the username passed by the user as the alias name max page size int optional If set to a value greater than 0 the LDAP backend will use the LDAP server s paged search control to request pages of up to the given size This can be used to avoid hitting the LDAP server s maximum result size limit Otherwise the LDAP backend will not use the paged search control Examples Scenario 1 LDAP server running on ldap example com port 389 Server supports STARTTLS command to initiate encryption on the standard port CA Certificate stored in file named ldap ca cert pem Server is Active Directory supporting the userPrincipalName attribute Users are identified as username example com Groups are nested we will use LDAP MATCHING RULE IN CHAIN to walk the ancestry graph Group search will start under ou Groups dc example dc com For all group objects under that path the member attribute will be checked for a match against the authenticated user Group names are identified using their cn attribute shell session vault write auth ldap config url ldap ldap example com userdn ou Users dc example dc com groupdn ou Groups dc example dc com groupfilter objectClass group member 1 2 840 113556 1 4 1941 groupattr cn upndomain example com certificate ldap ca cert pem insecure tls false starttls true Scenario 2 LDAP server running on ldap example com port 389 Server supports STARTTLS command to initiate encryption on the standard port CA Certificate stored in file named ldap ca cert pem Server does not allow anonymous binds for performing user search Bind account used for searching is cn vault ou users dc example dc com with password My ecrt3tP4ss User objects are under the ou Users dc example dc com organizational unit Username passed to vault when authenticating maps to the sAMAccountName attribute Group membership will be resolved via the memberOf attribute of user objects That search will begin under ou Users dc example dc com shell session vault write auth ldap config url ldap ldap example com userattr sAMAccountName userdn ou Users dc example dc com groupdn ou Users dc example dc com groupfilter objectClass person uid groupattr memberOf binddn cn vault ou users dc example dc com bindpass My ecrt3tP4ss 
certificate ldap ca cert pem insecure tls false starttls true Scenario 3 LDAP server running on ldap example com port 636 LDAPS CA Certificate stored in file named ldap ca cert pem User objects are under the ou Users dc example dc com organizational unit Username passed to vault when authenticating maps to the uid attribute User bind DN will be auto discovered using anonymous binding Group membership will be resolved via any one of memberUid member or uniqueMember attributes That search will begin under ou Groups dc example dc com Group names are identified using the cn attribute shell session vault write auth ldap config url ldaps ldap example com userattr uid userdn ou Users dc example dc com discoverdn true groupdn ou Groups dc example dc com certificate ldap ca cert pem insecure tls false starttls true LDAP group policy mapping Next we want to create a mapping from an LDAP group to a Vault policy shell session vault write auth ldap groups scientists policies foo bar This maps the LDAP group scientists to the foo and bar Vault policies We can also add specific LDAP users to additional potentially non LDAP groups Note that policies can also be specified on LDAP users as well shell session vault write auth ldap groups engineers policies foobar vault write auth ldap users tesla groups engineers policies zoobar This adds the LDAP user tesla to the engineers group which maps to the foobar Vault policy User tesla itself is associated with zoobar policy Finally we can test this by authenticating shell session vault login method ldap username tesla Password will be hidden Successfully authenticated The policies that are associated with this token are listed below default foobar zoobar Note on policy mapping It should be noted that user policy mapping happens at token creation time And changes in group membership on the LDAP server will not affect tokens that have already been provisioned To see these changes old tokens should be revoked and the user should be asked to reauthenticate User lockout include user lockout mdx API The LDAP auth method has a full HTTP API Please see the LDAP auth method API vault api docs auth ldap for more details |
---
layout: docs
page_title: Use JWT/OIDC authentication
description: >-
Use JWT/OIDC authentication with Vault to support OIDC and user-provided JWTs.
---
# Use JWT/OIDC authentication
@include 'x509-sha1-deprecation.mdx'
~> **Note**: Starting in Vault 1.17, if the JWT in the authentication request
contains an `aud` claim, the associated `bound_audiences` for the "jwt" role
must match at least one of the `aud` claims declared for the JWT. For
additional details, refer to the [JWT auth method (API)](/vault/api-docs/auth/jwt)
documentation and [1.17 Upgrade Guide](/vault/docs/upgrading/upgrade-to-1.17.x#jwt-auth-login-requires-bound-audiences-on-the-role).
The `jwt` auth method can be used to authenticate with Vault using
[OIDC](https://en.wikipedia.org/wiki/OpenID_Connect) or by providing a
[JWT](https://en.wikipedia.org/wiki/JSON_Web_Token).
The OIDC method allows authentication via a configured OIDC provider using the
user's web browser. This method may be initiated from the Vault UI or the
command line. Alternatively, a JWT can be provided directly. The JWT is
cryptographically verified using locally-provided keys, or, if configured, an
OIDC Discovery service can be used to fetch the appropriate keys. The choice of
method is configured per role.
Both methods allow additional processing of the claims data in the JWT. Some of
the concepts common to both methods will be covered first, followed by specific
examples of OIDC and JWT usage.
## OIDC authentication
This section covers the setup and use of OIDC roles. If a JWT is to be provided directly,
refer to the [JWT Authentication](/vault/docs/auth/jwt#jwt-authentication) section below. Basic
familiarity with [OIDC concepts](https://developer.okta.com/blog/2017/07/25/oidc-primer-part-1)
is assumed. The Authorization Code flow makes use of the Proof Key for Code
Exchange (PKCE) extension.
Vault includes two built-in OIDC login flows: the Vault UI, and the CLI
using a `vault login`.
### Redirect URIs
An important part of OIDC role configuration is properly setting redirect URIs. This must be
done both in Vault and with the OIDC provider, and these configurations must align. The
redirect URIs are specified for a role with the `allowed_redirect_uris` parameter. There are
different redirect URIs to configure the Vault UI and CLI flows, so one or both will need to
be set up depending on the installation.
**CLI**
If you plan to support authentication via `vault login -method=oidc`, a localhost redirect URI
must be set. This can usually be: `http://localhost:8250/oidc/callback`. Logins via the CLI may
specify a different host and/or listening port if needed, and a URI with this host/port must match one
of the configured redirect URIs. These same "localhost" URIs must be added to the provider as well.
**Vault UI**
Logging in via the Vault UI requires a redirect URI of the form:
`https://{host:port}/ui/vault/auth/{path}/oidc/callback`
The "host:port" must be correct for the Vault server, and "path" must match the path the JWT
backend is mounted at (e.g. "oidc" or "jwt").
If the [oidc_response_mode](/vault/api-docs/auth/jwt#oidc_response_mode) is set to `form_post`, then
logging in via the Vault UI requires a redirect URI of the form:
`https://{host:port}/v1/auth/{path}/oidc/callback`
Prior to Vault 1.6, if [namespaces](/vault/docs/enterprise/namespaces) are in use,
they must be added as query parameters, for example:
`https://vault.example.com:8200/ui/vault/auth/oidc/oidc/callback?namespace=my_ns`
For Vault 1.6+, it is no longer necessary to add the namespace as a query
parameter in the redirect URI, if
[`namespace_in_state`](/vault/api-docs/auth/jwt#namespace_in_state) is set to `true`,
which is the default for new configs.
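For example, a role that supports both the CLI and UI flows could allow both callback URIs together. This is a sketch; the host, port, and role name are placeholders:
```shell-session
$ vault write auth/oidc/role/demo \
    user_claim="sub" \
    policies="default" \
    allowed_redirect_uris="http://localhost:8250/oidc/callback" \
    allowed_redirect_uris="https://vault.example.com:8200/ui/vault/auth/oidc/oidc/callback"
```
Remember that the same URIs must also be registered with the OIDC provider.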
### OIDC login (Vault UI)
1. Select the "OIDC" login method.
1. Enter a role name if necessary.
1. Press "Sign In" and complete the authentication with the configured provider.
### OIDC login (CLI)
The CLI login defaults to a path of `/oidc`. If this auth method was enabled at a
different path, specify `-path=/my-path` in the CLI.
```shell-session
$ vault login -method=oidc port=8400 role=test
Complete the login via your OIDC provider. Launching browser to:
https://myco.auth0.com/authorize?redirect_uri=http%3A%2F%2Flocalhost%3A8400%2Foidc%2Fcallback&client_id=r3qXc2bix9eF...
```
The browser will open to the generated URL to complete the provider's login. The
URL may be entered manually if the browser cannot be automatically opened.
- `skip_browser` (default: "false"). Toggle the automatic launching of the default browser to the login URL.
The callback listener may be customized with the following optional parameters. These are typically
not required to be set:
- `mount` (default: "oidc")
- `listenaddress` (default: "localhost")
- `port` (default: 8250)
- `callbackhost` (default: "localhost")
- `callbackmethod` (default: "http")
- `callbackport` (default: value set for `port`). This value is used in the `redirect_uri`, whereas
`port` is the localhost port that the listener is using. These two may be different in advanced setups.
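As a hedged example, the parameters above can be combined when the callback is reached through a forwarded host and port; the host name and ports here are placeholders:
```shell-session
$ vault login -method=oidc role=test \
    port=8400 \
    callbackhost="vault-login.example.com" \
    callbackport="8443"
```
The listener runs locally on port 8400, while the generated `redirect_uri` becomes `http://vault-login.example.com:8443/oidc/callback`, which must appear in the role's `allowed_redirect_uris`.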
### OIDC provider configuration
The OIDC authentication flow has been successfully tested with a number of providers. A full
guide to configuring OAuth/OIDC applications is beyond the scope of Vault documentation, but a
collection of provider configuration steps is available to help you get started:
[OIDC Provider Setup](/vault/docs/auth/jwt/oidc-providers)
### OIDC configuration troubleshooting
The amount of configuration required for OIDC is relatively small, but it can be tricky to debug
why things aren't working. Some tips for setting up OIDC:
- If a role parameter (e.g. `bound_claims`) requires a map value, it can't be set individually using
the Vault CLI. In these cases the best approach is to write the entire configuration as a single
JSON object:
```text
vault write auth/oidc/role/demo -<<EOF
{
"user_claim": "sub",
"bound_audiences": "abc123",
"role_type": "oidc",
"policies": "demo",
"ttl": "1h",
"bound_claims": { "groups": ["mygroup/mysubgroup"] }
}
EOF
```
- Monitor Vault's log output. Important information about OIDC validation failures will be emitted.
- Ensure Redirect URIs are correct in Vault and on the provider. They need to match exactly. Check:
http/https, 127.0.0.1/localhost, port numbers, whether trailing slashes are present.
- Start simple. The only claim configuration a role requires is `user_claim`. After authentication is
known to work, you can add additional claims bindings and metadata copying.
- `bound_audiences` is optional for OIDC roles and typically not required. OIDC providers will use
the client_id as the audience and OIDC validation expects this.
- Check your provider for what scopes are required in order to receive all
of the information you need. The scopes "profile" and "groups" often need to be
requested, and can be added by setting `oidc_scopes="profile,groups"` on the role.
- If you're seeing claim-related errors in logs, review the provider's docs very carefully to see
how they're naming and structuring their claims. Depending on the provider, you may be able to
construct a simple `curl` implicit grant request to obtain a JWT that you can inspect. An example
of how to decode the JWT (in this case located in the "access_token" field of a JSON response):
`cat jwt.json | jq -r .access_token | cut -d. -f2 | base64 -D`
- As of Vault 1.2, the [`verbose_oidc_logging`](/vault/api-docs/auth/jwt#verbose_oidc_logging) role
option is available which will log the received OIDC token to the _server_ logs if debug-level logging is enabled. This can
be helpful when debugging provider setup and verifying that the received claims are what you expect.
Since claims data is logged verbatim and may contain sensitive information, this option should not be
used in production.
- Azure requires some additional configuration when a user is a member of more
than 200 groups, described in [Azure-specific handling
configuration](/vault/docs/auth/jwt/oidc-providers/azuread#optional-azure-specific-configuration)
## JWT authentication
The authentication flow for roles of type "jwt" is simpler than OIDC since Vault
only needs to validate the provided JWT.
### JWT verification
Vault verifies JWT signatures against public keys from the issuer. You can
only configure one JWT signature verification method per mounted backend from
the following options:
- **Static Keys**. A set of public keys is stored directly in the backend configuration. See the
[jwt_validation_pubkeys](/vault/api-docs/auth/jwt#jwt_validation_pubkeys)
configuration option.
- **JWKS**. A JSON Web Key Set ([JWKS](https://tools.ietf.org/html/rfc7517)) URL and optional
certificate chain is configured. Keys will be fetched from this endpoint for authentication.
See the [jwks_url](/vault/api-docs/auth/jwt#jwks_url) and [jwks_ca_pem](/vault/api-docs/auth/jwt#jwks_ca_pem)
configuration options.
- **JWKS Pairs**. A list of JSON Web Key Set ([JWKS](https://tools.ietf.org/html/rfc7517)) URLs and optional
certificate chain for each is configured. Keys will be fetched from each endpoint for authentication,
stopping at the first set to successfully verify the JWT signature. See the
[jwks_pairs](/vault/api-docs/auth/jwt#jwks_pairs) configuration option.
- **OIDC Discovery**. An OIDC Discovery URL and optional certificate chain is configured. Keys
will be fetched from this URL during authentication. When OIDC Discovery is used, OIDC validation
criteria (e.g. `iss`, `aud`, etc.) will be applied. See the [oidc_discovery_url](/vault/api-docs/auth/jwt#oidc_discovery_url)
and [oidc_discovery_ca_pem](/vault/api-docs/auth/jwt#oidc_discovery_ca_pem) configuration
options.
To configure additional verification methods, you must mount and configure one
backend instance per method at different paths.
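For example, to verify JWTs against both a JWKS endpoint and a locally-stored public key, you could mount two instances of the auth method; the paths, URL, and key file below are illustrative:
```shell-session
$ vault auth enable -path=jwt-jwks jwt
$ vault write auth/jwt-jwks/config \
    jwks_url="https://idp.example.com/.well-known/jwks.json"
$ vault auth enable -path=jwt-static jwt
$ vault write auth/jwt-static/config \
    jwt_validation_pubkeys=@public-key.pem
```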
After verifying the JWT signatures, Vault checks the corresponding `aud` claim.
If the JWT in the authentication request contains an `aud` claim, the
associated `bound_audiences` for the role must match at least one of the `aud`
claims declared for the JWT.
### Via the CLI
```shell-session
$ vault write auth/<path-to-jwt-backend>/login role=demo jwt=...
```
The default path for the JWT authentication backend is `/jwt`, so if you're using the default backend, the command would be:
```shell-session
$ vault write auth/jwt/login role=demo jwt=...
```
If your JWT auth backend is using a different path, use that path.
### Via the API
The default endpoint is `auth/jwt/login`. If this auth method was enabled
at a different path, use that value instead of `jwt`.
```shell-session
$ curl \
--request POST \
--data '{"jwt": "your_jwt", "role": "demo"}' \
http://127.0.0.1:8200/v1/auth/jwt/login
```
The response will contain a token at `auth.client_token`:
```json
{
"auth": {
"client_token": "38fe9691-e623-7238-f618-c94d4e7bc674",
"accessor": "78e87a38-84ed-2692-538f-ca8b9f400ab3",
"policies": ["default"],
"metadata": {
"role": "demo"
},
"lease_duration": 2764800,
"renewable": true
}
}
```
## Configuration
Auth methods must be configured in advance before users or machines can
authenticate. These steps are usually completed by an operator or configuration
management tool.
1. Enable the JWT auth method. Either the "jwt" or "oidc" name may be used. The
backend will be mounted at the chosen name.
```text
$ vault auth enable jwt
or
$ vault auth enable oidc
```
1. Use the `/config` endpoint to configure Vault. To support JWT roles, either local keys, JWKS URL(s), or an OIDC
Discovery URL must be present. For OIDC roles, OIDC Discovery URL, OIDC Client ID and OIDC Client Secret are required. For the
list of available configuration options, please see the [API documentation](/vault/api-docs/auth/jwt).
```text
$ vault write auth/jwt/config \
oidc_discovery_url="https://myco.auth0.com/" \
oidc_client_id="m5i8bj3iofytj" \
oidc_client_secret="f4ubv72nfiu23hnsj" \
default_role="demo"
```
If you only need to perform JWT verification (validating JWTs against the discovery endpoint, with no OIDC login flow), leave the `oidc_client_id` and `oidc_client_secret` blank.
```text
$ vault write auth/jwt/config \
oidc_discovery_url="https://MYDOMAIN.eu.auth0.com/" \
oidc_client_id="" \
       oidc_client_secret=""
```
1. Create a named role:
```text
vault write auth/jwt/role/demo \
allowed_redirect_uris="http://localhost:8250/oidc/callback" \
bound_subject="r3qX9DljwFIWhsiqwFiu38209F10atW6@clients" \
bound_audiences="https://vault.plugin.auth.jwt.test" \
user_claim="https://vault/user" \
groups_claim="https://vault/groups" \
policies=webapps \
ttl=1h
```
This role authorizes JWTs with the given subject and audience claims, gives
it the `webapps` policy, and uses the given user/groups claims to set up
Identity aliases.
For the complete list of configuration options, please see the API
documentation.
### Bound claims
Once a JWT has been validated as being properly signed and not expired, the
authorization flow will validate that any configured "bound" parameters match.
In some cases there are dedicated parameters, for example `bound_subject`,
that must match the provided `sub` claim. For roles of type "jwt":
1. the `bound_audiences` parameter is required when an `aud` claim is set.
1. the `bound_audiences` parameter must match at least one of the provided `aud` claims (see the example below).
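For illustration, a role configured with `bound_audiences="https://vault.plugin.auth.jwt.test"` would accept a JWT whose decoded payload carries that value in its `aud` claim; the second audience entry below is a hypothetical extra value:
```json
{
  "sub": "r3qX9DljwFIWhsiqwFiu38209F10atW6@clients",
  "aud": ["https://vault.plugin.auth.jwt.test", "https://other-audience.test"]
}
```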
You can also configure roles to check an arbitrary set of claims and required
values with the `bound_claims` map. For example, assume `bound_claims` is set to:
```json
{
"division": "Europe",
"department": "Engineering"
}
```
Only JWTs containing both the "division" and "department" claims, and
respective matching values of "Europe" and "Engineering", would be authorized.
If the expected value is a list, the claim must match one of the items in the list.
To limit authorization to a set of email addresses:
```json
{
"email": ["[email protected]", "[email protected]"]
}
```
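Because `bound_claims` is a map, it cannot be passed as individual CLI arguments; as noted in the troubleshooting tips above, write the role as a single JSON object. A minimal sketch using the email list above (the role name and audience are placeholders):
```text
vault write auth/jwt/role/email-demo -<<EOF
{
  "user_claim": "email",
  "bound_audiences": "abc123",
  "policies": "default",
  "bound_claims": { "email": ["fred@example.com", "julie@example.com"] }
}
EOF
```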
Bound claims can optionally be configured with globs. See the [API documentation](/vault/api-docs/auth/jwt#bound_claims_type) for more details.
### Claims as metadata
Data from claims can be copied into the resulting auth token and alias metadata by configuring `claim_mappings`. This role
parameter is a map of items to copy. The map elements are of the form: `"<JWT claim>":"<metadata key>"`. Assume
`claim_mappings` is set to:
```json
{
"division": "organization",
"department": "department"
}
```
This specifies that the value in the JWT claim "division" should be copied to the metadata key "organization". The JWT
"department" claim value will also be copied into metadata but will retain the key name. If a claim is configured in `claim_mappings`,
it must exist in the JWT or the authentication will fail.
Note: the metadata key name "role" is reserved and may not be used for claim mappings. Since Vault 1.16, the role name is available
under the key `role` in the alias metadata of the entity after a successful login.
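A minimal sketch of setting the mapping above on a role, with a placeholder role name and audience:
```text
vault write auth/jwt/role/mapped-demo -<<EOF
{
  "user_claim": "sub",
  "bound_audiences": "abc123",
  "claim_mappings": { "division": "organization", "department": "department" }
}
EOF
```
After a successful login, the resulting token and entity alias carry the `organization` and `department` metadata keys.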
### Claim specifications and JSON pointer
Some parameters (e.g. `bound_claims`, `groups_claim`, `claim_mappings`, `user_claim`) are
used to point to data within the JWT. If the desired key is at the top level of the JWT,
the name can be provided directly. If it is nested at a lower level, a JSON Pointer may be
used.
Assume the following JSON data to be referenced:
```json
{
"division": "North America",
"groups": {
"primary": "Engineering",
"secondary": "Software"
}
}
```
A parameter of `"division"` will reference "North America", as this is a top level key. A parameter
`"/groups/primary"` uses JSON Pointer syntax to reference "Engineering" at a lower level. Any valid
JSON Pointer can be used as a selector. Refer to the
[JSON Pointer RFC](https://tools.ietf.org/html/rfc6901) for a full description of the syntax.
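For example, a role that reads its groups from the nested key above could use a JSON Pointer for `groups_claim`; other role parameters are omitted for brevity:
```shell-session
$ vault write auth/jwt/role/demo \
    user_claim="sub" \
    groups_claim="/groups/primary" \
    policies="default"
```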
## Tutorial
Refer to the following tutorials for OIDC auth method usage examples:
- [OIDC Auth Method](/vault/tutorials/auth-methods/oidc-auth)
- [Azure Active Directory with OIDC Auth Method and External
Groups](/vault/tutorials/auth-methods/oidc-auth-azure)
- [OIDC Authentication with Okta](/vault/tutorials/auth-methods/vault-oidc-okta)
- [OIDC Authentication with Google Workspace](/vault/tutorials/auth-methods/google-workspace-oauth)
## API
The JWT Auth Plugin has a full HTTP API. Please see the
[API docs](/vault/api-docs/auth/jwt) for more details.
---
layout: docs
page_title: Use with ADFS for OIDC
description: >-
Configure Vault to use Active Directory Federation Services (ADFS)
as an OIDC provider.
---
# Use ADFS for OIDC authentication
Configure your Vault instance to work with Active Directory Federation Services
(ADFS) and use ADFS accounts with OIDC for Vault login.
## Before you start
1. **You must have Vault v1.15.0+**.
1. **You must be running ADFS on Windows Server**.
1. **You must have an OIDC client secret from your ADFS instance**.
1. **You must know your Vault admin token**. If you do not have a valid admin
token, you can generate a new token in the Vault UI or with the
[Vault CLI](/vault/docs/commands/token/create).
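For example, assuming your current token is permitted to create child tokens and an `admin` policy exists in your cluster, you can mint a short-lived admin token with the CLI:
```shell-session
$ vault token create -policy="admin" -ttl="1h"
```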
## Step 1: Enable the OIDC authN method for Vault
<Tabs>
<Tab heading="Vault CLI">
1. Save your Vault instance URL to the `VAULT_ADDR` environment variable:
```shell-session
$ export VAULT_ADDR="<URL_FOR_YOUR_VAULT_INSTALLATION>"
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ export VAULT_ADDR="https://myvault.example.com:8200"
```
</CodeBlockConfig>
1. Save your Vault admin token to the `VAULT_TOKEN` environment variable:
```shell-session
$ export VAULT_TOKEN="<YOUR_VAULT_ACCESS_TOKEN>"
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ export VAULT_TOKEN="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
```
</CodeBlockConfig>
1. **If you use Vault Enterprise or Vault HCP**, set the namespace where you
have the OIDC plugin mounted to the `VAULT_NAMESPACE` environment variable:
```shell-session
$ export VAULT_NAMESPACE="<OIDC_NAMESPACE>"
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ export VAULT_NAMESPACE="oidc-ns"
```
</CodeBlockConfig>
1. Enable the OIDC authentication plugin:
```shell-session
$ vault auth enable -path=<YOUR_OIDC_MOUNT_PATH> oidc
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault auth enable -path=/adfs oidc
```
</CodeBlockConfig>
</Tab>
<Tab heading="Vault UI">
1. Open the web UI for your Vault instance.
1. Select **Access** from the left-hand menu.
1. Click **Enable new method** on the Access page.
1. Select **OIDC**.
1. Click **Next**.
1. Set the mount path for the OIDC plugin. For example, `adfs`.
1. Click **Enable Method**.
1. Click **Save** to enable the plugin.
</Tab>
</Tabs>
## Step 2: Create a new application group in ADFS
<Note title="Save the client ID">
Make note of the 32-character **client identifier** provided by ADFS for your
new application group (for example, `d879d6fb-d2de-4596-b39c-191b2f83c03f`).
You will need the client ID to configure your OIDC plugin for Vault.
</Note>
1. Open your Windows Server UI.
1. Go to the Server Manager screen and click **Tools**.
1. Select **AD FS Management**.
1. Right-click on **Application Groups** and select **Add Application Group...**.
1. Follow the prompts to create a new application group with the following
information:
- **Name**: Vault
- **Description**: a short description explaining the purpose of the application
group. For example, "Enable access to Vault".
- **Application type**: Server application
- **Redirect URI**: add the callback URL of your OIDC plugin for web
redirects and the local OIDC callback URL for Vault CLI redirects. For
example, `https://myvault.example.com:8200/ui/vault/auth/<YOUR_OIDC_MOUNT_PATH>/oidc/callback`
and `http://localhost:8250/oidc/callback`.
1. Check the **Generate a shared secret** box and save the secret string.
1. Confirm the application group details are correct before closing.
## Step 3: Configure the webhook in ADFS
1. Open the Vault application group from the ADFS management screen.
1. Click **Add application...**
1. Select **Web API**.
1. Follow the prompts to configure a new webhook with the following information:
- Identifier: the client ID of your application group
- Access control policy: select an existing policy or `Permit everyone`
- Enable `allatclaims`, `email`, `openid`, and `profile`
1. Select the new webhook (Vault - Web API) from the properties screen of the
Vault application group.
1. Open the **Issuance Transform Rules** tab.
1. Click **Add Rule...** and follow the prompts to create a new authentication
rule with the following information:
- Select **Send LDAP Attributes as Claims**
- Rule name: `LDAP Group`
- Attribute store: `Active Directory`
- LDAP attribute: `Token-Groups - Unqualified Names`
- Outgoing claim type: `Group`
## Step 4: Create a default ADFS role in Vault
Use the `vault write` CLI command to create a default role for users
authenticating with ADFS where:
- `ADFS_APPLICATION_GROUP_CLIENT_ID` is the client ID provided by ADFS.
- `YOUR_OIDC_MOUNT_PATH` is the mount path for the OIDC plugin. For example,
`adfs`.
- `ADFS_ROLE` is the name of your role. For example, `adfs-default`.
```shell-session
$ vault write auth/<YOUR_OIDC_MOUNT_PATH>/role/<ADFS_ROLE> \
bound_audiences="<ADFS_APPLICATION_GROUP_CLIENT_ID>" \
allowed_redirect_uris="${VAULT_ADDR}/ui/vault/auth/<YOUR_OIDC_MOUNT_PATH>/oidc/callback" \
allowed_redirect_uris="http://localhost:8250/oidc/callback" \
user_claim="upn" groups_claim="group" token_policies="default"
```
<Tip>
Using the `upn` value for `user_claim` tells Vault to consider the user email
associated with the ADFS authentication token as an entity alias.
</Tip>
## Step 5: Configure the OIDC plugin
Use the client ID and shared secret for your ADFS application group to finish
configuring the OIDC plugin.
<Tabs>
<Tab heading="Vault CLI">
Use the `vault write` CLI command to save the configuration details for the OIDC
plugin where:
- `ADFS_URL` is the discovery URL for your ADFS instance. For example,
`https://adfs.example.com/adfs`
- `ADFS_APPLICATION_GROUP_CLIENT_ID` is the client ID provided by ADFS.
- `YOUR_OIDC_MOUNT_PATH` is the mount path for the OIDC plugin. For example,
`adfs`.
- `ADFS_APPLICATION_GROUP_SECRET` is the shared secret for your ADFS application
group.
- `ADFS_ROLE` is the name of your role. For example, `adfs-default`.
```shell-session
$ vault write auth/<YOUR_OIDC_MOUNT_PATH>/config \
oidc_discovery_url="<ADFS_URL>" \
oidc_client_id="<ADFS_APPLICATION_GROUP_CLIENT_ID>" \
oidc_client_secret="<ADFS_APPLICATION_GROUP_SECRET>" \
default_role="<ADFS_ROLE>"
```
</Tab>
<Tab heading="Vault UI">
1. Open the Vault UI.
1. Select the OIDC plugin from the **Access** screen.
1. Click **Enable Method** and follow the prompts to configure the OIDC plugin
with the following information:
- OIDC discovery URL: the discovery URL for your ADFS instance. For example,
`https://adfs.example.com/adfs`.
- Default role: the name of your new ADFS role. For example, `adfs-default`.
1. Click **OIDC Options** and set your OIDC information:
- OIDC client ID: the application group client ID provided by ADFS.
- OIDC client secret: the shared secret for your ADFS application group.
1. Save your changes.
</Tab>
</Tabs>
## OPTIONAL: Link Active Directory groups to Vault
1. Enable the KV secret engine in Vault for ADFS:
```shell-session
$ vault secrets enable -path=<ADFS_KV_PLUGIN_PATH> kv-v2
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault secrets enable -path=adfs-kv kv-v2
```
</CodeBlockConfig>
1. Create a read-only policy against the KV plugin for ADFS:
```shell-session
$ vault policy write <RO_ADFS_POLICY_NAME> - << EOF
# Read and list policy for the ADFS KV mount
path "<ADFS_KV_PLUGIN_PATH>/*" {
capabilities = ["read", "list"]
}
EOF
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault policy write read-adfs-test - << EOF
# Read and list policy for the ADFS KV mount
path "adfs-kv/*" {
capabilities = ["read", "list"]
}
EOF
```
</CodeBlockConfig>
1. Write a test value to the KV plugin:
```shell-session
$ vault kv put <ADFS_KV_PLUGIN_PATH>/test test_key="test value"
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault kv put adfs-kv/test test_key="test value"
```
</CodeBlockConfig>
Now you can create a Vault group and link it to an AD group:
<Tabs>
<Tab heading="Vault CLI">
1. Create an external group in Vault and save the group ID to a file named
`group_id.txt`:
```shell-session
$ vault write \
-format=json \
identity/group name="<YOUR_NEW_VAULT_GROUP_NAME>" \
policies="<RO_ADFS_POLICY_NAME>" \
type="external" | jq -r ".data.id" > group_id.txt
```
1. Retrieve the mount accessor for the ADFS authentication method and save it to
a file named `accessor_adfs.txt`:
```shell-session
$ vault auth list -format=json | \
jq -r '.["<YOUR_OIDC_MOUNT_PATH>/"].accessor' > \
accessor_adfs.txt
```
1. Create a group alias:
```shell-session
$ vault write identity/group-alias \
name="<YOUR_EXISTING_AD_GROUP>" \
mount_accessor=$(cat accessor_adfs.txt) \
canonical_id="$(cat group_id.txt)"
```
1. Login to Vault as an AD user who is a member of YOUR_EXISTING_AD_GROUP.
1. Read your test value from the KV plugin:
```shell-session
   $ vault kv get <ADFS_KV_PLUGIN_PATH>/test
```
</Tab>
<Tab heading="Vault UI">
1. Open the Vault UI.
1. Select **Access**.
1. Select **Groups**.
1. Click **Create group**.
1. Follow the prompts to create an external group with the following
information:
- Name: your new Vault group name
- Type: `external`
- Policies: the read-only ADFS policy you created. For example,
`read-adfs-test`.
1. Click on **Add alias** and follow the prompts to map the Vault group name
to an existing group on your AD:
- Name: the name of an existing AD group (**must match exactly**).
- Auth Backend: `<YOUR_OIDC_MOUNT_PATH>/ (oidc)`
1. Login to Vault as an AD user who is a member of the aliased AD group.
1. Read your test value from the KV plugin.
</Tab>
</Tabs>
---
layout: docs
page_title: Use Kubernetes for OIDC authentication
description: >-
Configure Vault to use Kubernetes as an OIDC provider.
---
# Use Kubernetes for OIDC authentication
Kubernetes can function as an OIDC provider such that Vault can validate its
service account tokens using JWT/OIDC auth.
-> **Note:** The JWT auth engine does **not** use Kubernetes' `TokenReview` API
during authentication, and instead uses public key cryptography to verify the
contents of JWTs. This means tokens that have been revoked by Kubernetes will
still be considered valid by Vault until their expiry time. To mitigate this
risk, use short TTLs for service account tokens or use
[Kubernetes auth](/vault/docs/auth/kubernetes) which _does_ use the `TokenReview` API.
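For example, with `kubectl` v1.24.0+ you can mint a deliberately short-lived
token for testing the mitigation described above (the flags shown are standard
`kubectl create token` options):

```shell-session
$ kubectl create token default --duration=10m
```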
## Use service account issuer discovery
When using service account issuer discovery, you only need to provide the JWT
auth mount with an OIDC discovery URL, and sometimes a TLS certificate authority
to trust. This makes it the most straightforward method to configure if your
Kubernetes cluster meets the requirements.
Kubernetes cluster requirements:
* [`ServiceAccountIssuerDiscovery`][k8s-sa-issuer-discovery] feature enabled.
* Present from 1.18, defaults to enabled from 1.20.
* kube-apiserver's `--service-account-issuer` flag is set to a URL that is
reachable from Vault. Public by default for most managed Kubernetes solutions.
* Must use short-lived service account tokens when logging in.
* Tokens mounted into pods default to short-lived from 1.21.
Configuration steps:
1. Ensure OIDC discovery URLs do not require authentication, as detailed
[here][k8s-sa-issuer-discovery]:
```bash
kubectl create clusterrolebinding oidc-reviewer \
--clusterrole=system:service-account-issuer-discovery \
--group=system:unauthenticated
```
1. Find the issuer URL of the cluster.
```bash
ISSUER="$(kubectl get --raw /.well-known/openid-configuration | jq -r '.issuer')"
```
1. Enable and configure JWT auth in Vault.
1. If Vault is running in Kubernetes:
```bash
kubectl exec vault-0 -- vault auth enable jwt
kubectl exec vault-0 -- vault write auth/jwt/config \
oidc_discovery_url=https://kubernetes.default.svc.cluster.local \
oidc_discovery_ca_pem=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
```
1. Alternatively, if Vault is _not_ running in Kubernetes:
-> **Note:** When Vault is outside the cluster, the `$ISSUER` endpoint below may
or may not be reachable; a quick reachability check follows this list. If it is
not reachable, you can configure JWT auth using
[`jwt_validation_pubkeys`](#use-jwt-validation-public-keys) instead.
```bash
vault auth enable jwt
vault write auth/jwt/config oidc_discovery_url="${ISSUER}"
```
1. Configure a role and log in as detailed [below](#create-a-role-and-log-in).
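As a quick reachability check for the note above, you can fetch the discovery
document from the machine where Vault runs. This is a minimal sketch, assuming
`curl` and `jq` are installed and `$ISSUER` is set as in the earlier step:

```shell-session
$ curl --silent --fail "${ISSUER}/.well-known/openid-configuration" | jq '.issuer, .jwks_uri'
```

If this fails, fall back to the
[`jwt_validation_pubkeys`](#use-jwt-validation-public-keys) approach.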
[k8s-sa-issuer-discovery]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery
## Use JWT validation public keys
This method can be useful if Kubernetes' API is not reachable from Vault or if
you would like a single JWT auth mount to service multiple Kubernetes clusters
by chaining their public signing keys.
<Note title="Rotation of the JWT Signing Key in Kubernetes">
Should the JWT Signing Key used by Kubernetes be rotated,
this process should be repeated with the new key.
</Note>
Kubernetes cluster requirements:
* [`ServiceAccountIssuerDiscovery`][k8s-sa-issuer-discovery] feature enabled.
* Present from 1.18, defaults to enabled from 1.20.
* This requirement can be avoided if you can access the Kubernetes master
nodes to read the public signing key directly from disk at
`/etc/kubernetes/pki/sa.pub`. In this case, you can skip the steps to
retrieve and then convert the key as it will already be in PEM format.
* Must use short-lived service account tokens when logging in.
* Tokens mounted into pods default to short-lived from 1.21.
Configuration steps:
1. Fetch the service account signing public key from your cluster's JWKS URI.
```bash
# Query the jwks_uri specified in /.well-known/openid-configuration
kubectl get --raw "$(kubectl get --raw /.well-known/openid-configuration | jq -r '.jwks_uri' | sed -r 's/.*\.[^/]+(.*)/\1/')"
```
1. Convert the keys from JWK format to PEM. You can use a CLI tool or an online
converter such as [this one][jwk-to-pem]; a sanity check for the converted keys
appears after these steps.
1. Configure the JWT auth mount with those public keys.
```bash
vault write auth/jwt/config \
jwt_validation_pubkeys="-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9...
-----END PUBLIC KEY-----","-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9...
-----END PUBLIC KEY-----"
```
1. Configure a role and log in as detailed [below](#creating-a-role-and-logging-in).
[jwk-to-pem]: https://8gwifi.org/jwkconvertfunctions.jsp
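Whether you converted the keys from JWK format or copied `sa.pub` directly from
a control plane node, it can be worth confirming that each key parses as a PEM
public key before writing the Vault configuration. A minimal check, assuming
`openssl` is installed (the file name is illustrative):

```shell-session
$ openssl pkey -pubin -in sa-signing-key.pem -noout -text
```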
## Create a role and log in
Once your JWT auth mount is configured, you're ready to configure a role and
log in. The following assumes you use the projected service account token
available in all pods by default. See [Specify TTL and audience](#specify-ttl-and-audience)
below if you'd like to control the audience or TTL.
1. Choose any value from the array of default audiences. In these examples,
there is only one audience in the `aud` array,
`https://kubernetes.default.svc.cluster.local`.
To find the default audiences, either create a fresh token (requires
`kubectl` v1.24.0+):
```shell-session
$ kubectl create token default | cut -f2 -d. | base64 --decode
{"aud":["https://kubernetes.default.svc.cluster.local"], ... "sub":"system:serviceaccount:default:default"}
```
Or read a token from a running pod's filesystem:
```shell-session
$ kubectl exec my-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/token | cut -f2 -d. | base64 --decode
{"aud":["https://kubernetes.default.svc.cluster.local"], ... "sub":"system:serviceaccount:default:default"}
```
1. Create a role for JWT auth that the `default` service account from the
`default` namespace can use.
```bash
vault write auth/jwt/role/my-role \
role_type="jwt" \
bound_audiences="<AUDIENCE-FROM-PREVIOUS-STEP>" \
user_claim="sub" \
bound_subject="system:serviceaccount:default:default" \
policies="default" \
ttl="1h"
```
1. Pods or other clients with access to a service account JWT can then log in.
```bash
vault write auth/jwt/login \
role=my-role \
jwt=@/var/run/secrets/kubernetes.io/serviceaccount/token
# OR equivalent to:
curl \
--fail \
--request POST \
--header "X-Vault-Request: true" \
--data '{"jwt":"<JWT-TOKEN-HERE>","role":"my-role"}' \
"${VAULT_ADDR}/v1/auth/jwt/login"
```
## Specify TTL and audience
If you would like to specify a custom TTL or audience for service account tokens,
the following pod spec illustrates a volume mount that overrides the default
admission injected token. This is especially relevant if you are unable to
disable the [--service-account-extend-token-expiration][k8s-extended-tokens]
flag for `kube-apiserver` and want to use short TTLs.
When using the resulting token, you will need to set `bound_audiences=vault`
when creating roles in Vault's JWT auth mount.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
# automountServiceAccountToken is redundant in this example because the
# mountPath used overlaps with the default path. The overlap stops the default
# admission injected token from being created. You can use this option to
# ensure only a single token is mounted if you choose a different mount path.
automountServiceAccountToken: false
containers:
- name: nginx
image: nginx
volumeMounts:
- name: custom-token
mountPath: /var/run/secrets/kubernetes.io/serviceaccount
volumes:
- name: custom-token
projected:
defaultMode: 420
sources:
- serviceAccountToken:
path: token
expirationSeconds: 600 # 10 minutes is the minimum TTL
audience: vault # Must match your JWT role's `bound_audiences`
# The remaining sources are included to mimic the rest of the default
# admission injected volume.
- configMap:
name: kube-root-ca.crt
items:
- key: ca.crt
path: ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
```
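Once a pod using this spec is running, you can reuse the decode pipeline from
earlier to confirm that the projected token carries the custom audience and
shorter expiry (the pod name `nginx` matches the example spec above):

```shell-session
$ kubectl exec nginx -- cat /var/run/secrets/kubernetes.io/serviceaccount/token | cut -f2 -d. | base64 --decode
```

The decoded payload should show `"aud":["vault"]` and an `exp` roughly 600
seconds after `iat`.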
[k8s-extended-tokens]: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/#options
---
layout: docs
page_title: Use Google for OIDC
description: >-
Configure Vault to use Google as an OIDC provider.
---
# Use Google for OIDC authentication
Main reference: [Using OAuth 2.0 to Access Google APIs](https://developers.google.com/identity/protocols/OAuth2)
1. Visit the [Google API Console](https://console.developers.google.com).
1. Create or select a project.
1. Navigate to Menu > APIs & Services.
1. Create a new credential via Credentials > Create Credentials > OAuth Client ID.
1. Configure the OAuth Consent Screen. Application Name is required. Save.
1. Select application type: "Web Application".
1. Configure Authorized [Redirect URIs](/vault/docs/auth/jwt#redirect-uris).
1. Save client ID and secret.
### Optional Google-specific configuration
Google-specific configuration is available when using Google as an identity provider from the
Vault JWT/OIDC auth method. The configuration allows Vault to obtain Google Workspace group membership and
user information during the JWT/OIDC authentication flow. The group membership obtained from Google Workspace
may be used for Identity group alias association. The user information obtained from Google Workspace can be
used to copy claims data into the resulting auth token and alias metadata via [claim_mappings](/vault/api-docs/auth/jwt#claim_mappings).
#### Setup
To set up the Google-specific handling, you'll need:
- A Google Workspace account with the [super admin role](https://support.google.com/a/answer/2405986?hl=en)
for granting domain-wide delegation API client access, or a service account that has been granted
[the necessary](https://cloud.google.com/identity/docs/how-to/setup#auth-no-dwd) group admin roles.
- The ability to create a service account in [Google Cloud Platform](https://console.developers.google.com/iam-admin/serviceaccounts).
- The [Admin SDK API](https://console.developers.google.com/apis/api/admin.googleapis.com/overview) enabled.
- An OAuth 2.0 application with an [internal user type](https://support.google.com/cloud/answer/10311615#user-type).
We **do not** recommend using an external user type since it would allow _any user_ with a
Google account to authenticate with Vault.
The Google-specific handling that's used to fetch Google Workspace groups and user information in Vault uses either
Google Workspace Domain-Wide Delegation of Authority for authentication and authorization, or group admin roles granted to a GCP service account.
Links to steps for setting up authentication and authorization:
- [DWDoA](https://developers.google.com/workspace/guides/create-credentials#service-account)
- [Without DWDoA](https://cloud.google.com/identity/docs/how-to/setup#auth-no-dwd)
In **step 11** within the section titled
[Optional: Set up domain-wide delegation for a service account](https://developers.google.com/workspace/guides/create-credentials#optional_set_up_domain-wide_delegation_for_a_service_account),
the only OAuth scopes that should be granted are:
- `https://www.googleapis.com/auth/admin.directory.group.readonly`
- `https://www.googleapis.com/auth/admin.directory.user.readonly`
~> This is an **important security step** in order to give the service account the least set of privileges
that enable the feature.
#### Configuration
- `provider` `(string: <required>)` - Name of the provider. Must be set to "gsuite".
- `gsuite_service_account` `(string: <optional>)` - Either the path to or the contents of a Google service
account key file in JSON format. If given as a file path, it must refer to a file that's readable on
the host that Vault is running on. If given directly as JSON contents, the JSON must be properly escaped.
If left empty, Application Default Credentials will be used.
- `gsuite_admin_impersonate` `(string: <optional>)` - Email address of a Google Workspace admin to impersonate.
- `fetch_groups` `(bool: false)` - If set to true, groups will be fetched from Google Workspace.
- `fetch_user_info` `(bool: false)` - If set to true, user info will be fetched from Google Workspace using the configured [user_custom_schemas](#user_custom_schemas).
- `groups_recurse_max_depth` `(int: <optional>)` - Group membership recursion max depth. Defaults to 0, which means don't recurse.
- `user_custom_schemas` `(string: <optional>)` - Comma-separated list of Google Workspace [custom schemas](https://developers.google.com/admin-sdk/directory/v1/guides/manage-schemas).
Values set for Google Workspace users using custom schema fields will be fetched and made available as claims that can be used with [claim_mappings](/vault/api-docs/auth/jwt#claim_mappings). Required if [fetch_user_info](#fetch_user_info) is set to true.
- `impersonate_principal` `(string: <optional>)` - Service account email that has been granted domain-wide delegation of authority in Google Workspace.
Required if accessing the Google Workspace Directory API through domain-wide delegation of authority, without using a service account key.
The service account Vault is running under must be granted the `iam.serviceAccounts.signJwt` permission on this service account.
If `gsuite_admin_impersonate` is specified, that Workspace user will be impersonated.
- `domain` `(string: <optional>)` - The domain to get groups from. Set this if your workspace is configured with more than one domain.
Example configuration:
```shell
vault write auth/oidc/config -<<EOF
{
"oidc_discovery_url": "https://accounts.google.com",
"oidc_client_id": "your_client_id",
"oidc_client_secret": "your_client_secret",
"default_role": "your_default_role",
"provider_config": {
"provider": "gsuite",
"gsuite_service_account": "/path/to/service-account.json",
"gsuite_admin_impersonate": "[email protected]",
"fetch_groups": true,
"fetch_user_info": true,
"groups_recurse_max_depth": 5,
"user_custom_schemas": "Education,Preferences",
"impersonate_principal": "[email protected]"
}
}
EOF
```
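To double-check the `oidc_discovery_url` value before saving the configuration,
you can inspect Google's discovery document directly (a quick sanity check,
assuming `curl` and `jq` are available):

```shell-session
$ curl --silent https://accounts.google.com/.well-known/openid-configuration | jq '.issuer'
"https://accounts.google.com"
```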
#### Role
The [user_claim](/vault/api-docs/auth/jwt#user_claim) value of the role must be set to
one of either `sub` or `email` for the Google Workspace group and user information
queries to succeed.
Example role:
```shell
vault write auth/oidc/role/your_default_role \
allowed_redirect_uris="http://localhost:8200/ui/vault/auth/oidc/oidc/callback,http://localhost:8250/oidc/callback" \
user_claim="sub" \
groups_claim="groups" \
claim_mappings="/Education/graduation_date"="graduation_date" \
claim_mappings="/Preferences/shirt_size"="shirt_size"
```
---
layout: docs
page_title: Use Azure AD for OIDC
description: >-
Configure Vault to use Azure Active Directory (AD) as an OIDC provider.
---
# Use Azure AD for OIDC authentication
~> **Note:** Azure Active Directory Applications that have custom signing keys as a result of using
the [claims-mapping](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-claims-mapping)
feature are currently not supported for OIDC authentication.
Reference: [Azure Active Directory v2.0 and the OpenID Connect protocol](https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-protocols-oidc)
1. Choose your Azure tenant.
1. Go to **Azure Active Directory** and
[register an application](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app)
for Vault.
1. Add Redirect URIs with the "Web" type. You may include two redirect URIs,
   one for CLI access and another for Vault UI access.
- `http://localhost:8250/oidc/callback`
- `https://hostname:port_number/ui/vault/auth/oidc/oidc/callback`
1. Record the "Application (client) ID" as you will need it as the `oidc_client_id`.
1. Under **Endpoints**, copy the OpenID Connect metadata document URL, omitting the `/.well-known...` portion.
   - The endpoint URL (`oidc_discovery_url`) will look like: `https://login.microsoftonline.com/tenant-guid-dead-beef-aaaa-aaaa/v2.0`
1. Under **Certificates & secrets**,
   [add a client secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#add-a-client-secret).
   Record the secret's value as you will need it as the `oidc_client_secret` for Vault.
### Connect AD group with Vault external group
Reference: [Azure Active Directory with OIDC Auth Method and External Groups](/vault/tutorials/auth-methods/oidc-auth-azure)
To connect the AD group with a [Vault external group](/vault/docs/secrets/identity#external-vs-internal-groups),
you will need
[Azure AD v2.0 endpoints](https://docs.microsoft.com/en-gb/azure/active-directory/develop/azure-ad-endpoint-comparison).
You should set up a [Vault policy](/vault/tutorials/policies/policies) for the Azure AD group to use.
1. Go to **Azure Active Directory** and choose your Vault application.
1. Go to **Token configuration** and **Add groups claim**. Select "All" or "SecurityGroup" based on
[which groups for a user](https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-fed-group-claims)
you want returned in the claim.
1. In Vault, enable the OIDC auth method.
1. Configure the OIDC auth method with the `oidc_client_id` (application ID), `oidc_client_secret`
(client secret), and `oidc_discovery_url` (endpoint URL) you recorded from Azure.
```shell
vault write auth/oidc/config \
oidc_client_id="your_client_id" \
oidc_client_secret="your_client_secret" \
default_role="your_default_role" \
oidc_discovery_url="https://login.microsoftonline.com/tenant_id/v2.0"
```
1. Configure the [OIDC Role](/vault/api-docs/auth/jwt#create-role) with the following:
- `user_claim` should be `"sub"` or `"oid"` following the
[recommendation](https://learn.microsoft.com/en-us/azure/active-directory/develop/id-token-claims-reference#use-claims-to-reliably-identify-a-user)
from Azure.
- `allowed_redirect_uris` should be the two redirect URIs for Vault CLI and UI access.
- `groups_claim` should be set to `"groups"`.
- `oidc_scopes` should be set to `"https://graph.microsoft.com/.default profile"`.
```shell
vault write auth/oidc/role/your_default_role \
user_claim="sub" \
allowed_redirect_uris="http://localhost:8250/oidc/callback,https://online_version_hostname:port_number/ui/vault/auth/oidc/oidc/callback" \
groups_claim="groups" \
oidc_scopes="https://graph.microsoft.com/.default profile" \
policies=default
```
1. In Vault, create the [external group](/vault/api-docs/secret/identity/group).
Record the group ID as you will need it for the group alias.
1. From Vault, retrieve the [OIDC accessor ID](/vault/api-docs/system/auth#list-auth-methods)
from the OIDC auth method as you will need it for the group alias's `mount_accessor`.
1. Go to the Azure AD Group you want to attach to Vault's external group. Record the `objectId`
as you will need it as the group alias name in Vault.
1. In Vault, create a [group alias](/vault/api-docs/secret/identity/group-alias)
for the external group and set the `objectId` as the group alias name.
```shell
vault write identity/group-alias \
name="your_ad_group_object_id" \
mount_accessor="vault_oidc_accessor_id" \
canonical_id="vault_external_group_id"
```
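To verify the end-to-end flow, log in from the CLI with the OIDC method; Vault
opens a browser for the Azure sign-in, and the resulting token should list the
policies attached to the external group. This is a minimal check, assuming the
auth method is mounted at the default `oidc` path:

```shell-session
$ vault login -method=oidc role=your_default_role
```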
### Optional Azure-specific configuration
If a user is a member of more than 200 groups (directly or indirectly), Azure will
send `_claim_names` and `_claim_sources`. For example, returned claims might look like:
```json
{
"_claim_names": {
"groups": "src1"
},
"_claim_sources": {
"src1": {
"endpoint": "https://graph.windows.net...."
}
}
}
```
The OIDC auth method role can be configured to include the user ID in the endpoint URL,
which will be used by Vault to retrieve the groups for the user. Additional API permissions
must be added to the Azure app in order to request the additional groups from the Microsoft
Graph API.
To set the proper permissions on the Azure app:
1. Locate the application under "App Registrations" in Azure
1. Navigate to the "API Permissions" page for the application
1. Add a permission
1. Select "Microsoft Graph"
1. Select "Delegated permissions"
1. Add the [User.Read](https://learn.microsoft.com/en-us/graph/permissions-reference#delegated-permissions-93) permission
1. Check the "Grant admin consent for Default Directory" checkbox
1. Configure the OIDC auth method in Vault by setting `"provider_config"` to Azure.
```shell
vault write auth/oidc/config -<<"EOH"
{
"oidc_client_id": "your_client_id",
"oidc_client_secret": "your_client_secret",
"default_role": "your_default_role",
"oidc_discovery_url": "https://login.microsoftonline.com/tenant_id/v2.0",
"provider_config": {
"provider": "azure"
}
}
EOH
```
1. Add `"profile"` to `oidc_scopes` so the user's ID comes back on the JWT.
```shell
vault write auth/oidc/role/your_default_role \
user_claim="sub" \
allowed_redirect_uris="http://localhost:8250/oidc/callback,https://online_version_hostname:port_number/ui/vault/auth/oidc/oidc/callback" \
groups_claim="groups" \
oidc_scopes="profile" \
policies="default"
```
---
layout: docs
page_title: Use AppRole authentication
description: >-
Use AppRole authentication with Vault to control how machines and services
authenticate to Vault.
---
# Use AppRole authentication
The `approle` auth method allows machines or _apps_ to authenticate with
Vault-defined _roles_. The open design of `AppRole` enables a varied set of
workflows and configurations to handle large numbers of apps. This auth method
is oriented to automated workflows (machines and services), and is less useful
for human operators. We recommend using `batch` tokens with the
`AppRole` auth method.
An "AppRole" represents a set of Vault policies and login constraints that must
be met to receive a token with those policies. The scope can be as narrow or
broad as desired. An AppRole can be created for a particular machine, or even
a particular user on that machine, or a service spread across machines. The
credentials required for successful login depend upon the constraints set on
the AppRole associated with the credentials.
## Authentication
### Via the CLI
The default path is `/approle`. If this auth method was enabled at a different
path, specify `auth/my-path/login` instead.
```shell-session
$ vault write auth/approle/login \
role_id=db02de05-fa39-4855-059b-67221c5c2f63 \
secret_id=6a174c20-f6de-a53c-74d2-6018fcceff64
Key Value
--- -----
token 65b74ffd-842c-fd43-1386-f7d7006e520a
token_accessor 3c29bc22-5c72-11a6-f778-2bc8f48cea0e
token_duration 20m0s
token_renewable true
token_policies [default]
```
### Via the API
The default endpoint is `auth/approle/login`. If this auth method was enabled
at a different path, use that value instead of `approle`.
```shell-session
$ curl \
--request POST \
--data '{"role_id":"988a9df-...","secret_id":"37b74931..."}' \
http://127.0.0.1:8200/v1/auth/approle/login
```
The response will contain the token at `auth.client_token`:
```json
{
"auth": {
"renewable": true,
"lease_duration": 2764800,
"metadata": {},
"policies": ["default", "dev-policy", "test-policy"],
"accessor": "5d7fb475-07cb-4060-c2de-1ca3fcbf0c56",
"client_token": "98a4c7ab-b1fe-361b-ba0b-e307aacfd587"
}
}
```
-> **Application Integration:** See the [Code Example](#code-example) section
for a code snippet demonstrating authentication with Vault using the AppRole
auth method.
## Configuration
Auth methods must be configured in advance before users or machines can
authenticate. These steps are usually completed by an operator or configuration
management tool.
### Via the CLI
1. Enable the AppRole auth method:
```shell-session
$ vault auth enable approle
```
1. Create a named role:
```shell-session
$ vault write auth/approle/role/my-role \
token_type=batch \
secret_id_ttl=10m \
token_ttl=20m \
token_max_ttl=30m \
secret_id_num_uses=40
```
~> **Note:** If the token issued by your AppRole needs the ability to create child tokens, you will need to set `token_num_uses` to 0.
For the complete list of configuration options, please see the
[AppRole API documentation](/vault/api-docs/auth/approle).
1. Fetch the RoleID of the AppRole:
```shell-session
$ vault read auth/approle/role/my-role/role-id
role_id db02de05-fa39-4855-059b-67221c5c2f63
```
1. Get a SecretID issued against the AppRole:
```shell-session
$ vault write -f auth/approle/role/my-role/secret-id
secret_id 6a174c20-f6de-a53c-74d2-6018fcceff64
secret_id_accessor c454f7e5-996e-7230-6074-6ef26b7bcf86
secret_id_ttl 10m
secret_id_num_uses 40
```
### Via the API
1. Enable the AppRole auth method:
```shell-session
$ curl \
--header "X-Vault-Token: ..." \
--request POST \
--data '{"type": "approle"}' \
http://127.0.0.1:8200/v1/sys/auth/approle
```
1. Create an AppRole with desired set of policies:
```shell-session
$ curl \
--header "X-Vault-Token: ..." \
--request POST \
--data '{"policies": "dev-policy,test-policy", "token_type": "batch"}' \
http://127.0.0.1:8200/v1/auth/approle/role/my-role
```
1. Fetch the identifier of the role:
```shell-session
$ curl \
--header "X-Vault-Token: ..." \
http://127.0.0.1:8200/v1/auth/approle/role/my-role/role-id
```
The response will look like:
```json
{
"data": {
"role_id": "988a9dfd-ea69-4a53-6cb6-9d6b86474bba"
}
}
```
1. Create a new secret identifier under the role:
```shell-session
$ curl \
--header "X-Vault-Token: ..." \
--request POST \
http://127.0.0.1:8200/v1/auth/approle/role/my-role/secret-id
```
The response will look like:
```json
{
"data": {
"secret_id_accessor": "45946873-1d96-a9d4-678c-9229f74386a5",
"secret_id": "37b74931-c4cd-d49a-9246-ccc62d682a25",
"secret_id_ttl": 600,
"secret_id_num_uses": 40
}
}
```
## Credentials/Constraints
### RoleID
RoleID is an identifier that selects the AppRole against which the other
credentials are evaluated. When authenticating against this auth method's login
endpoint, the RoleID is a required argument (via `role_id`) at all times. By
default, RoleIDs are unique UUIDs, which allow them to serve as secondary
secrets to the other credential information. However, they can be set to
particular values to match introspected information by the client (for
instance, the client's domain name).
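For example, to pin the RoleID to a value the client can derive from its
environment, you can overwrite the generated UUID (the value shown is
illustrative):

```shell-session
$ vault write auth/approle/role/my-role/role-id role_id="app1.example.com"
```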
### SecretID
SecretID is a credential that is required by default for any login (via
`secret_id`) and is intended to always be secret. (For advanced usage,
requiring a SecretID can be disabled via an AppRole's `bind_secret_id`
parameter, allowing machines with only knowledge of the RoleID, or matching
other set constraints, to fetch a token). SecretIDs can be created against an
AppRole either via generation of a 128-bit purely random UUID by the role
itself (`Pull` mode) or via specific, custom values (`Push` mode). Similarly to
tokens, SecretIDs have properties like usage-limit, TTLs and expirations.
#### Pull and push SecretID modes
If the SecretID used for login is fetched from an AppRole, this is operating in
Pull mode. If a "custom" SecretID is set against an AppRole by the client, it
is referred to as Push mode. Push mode mimics the behavior of the deprecated
App-ID auth method; however, in most cases Pull mode is the better approach. The
reason is that Push mode requires some other system to have knowledge of the
full set of client credentials (RoleID and SecretID) in order to create the
entry, even if these are then distributed via different paths. However, in Pull
mode, even though the RoleID must be known in order to distribute it to the
client, the SecretID can be kept confidential from all parties except for the
final authenticating client by using [Response Wrapping](/vault/docs/concepts/response-wrapping).
Push mode is available for App-ID workflow compatibility, which in some
specific cases is preferable, but in most cases Pull mode is more secure and
should be preferred.
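As a sketch of the two modes against the `my-role` AppRole from earlier
(placeholder values are illustrative):

```shell-session
# Pull mode with response wrapping: the role generates the SecretID, and
# wrapping it means only the final client ever sees the plaintext value.
$ vault write -wrap-ttl=60s -f auth/approle/role/my-role/secret-id

# The client exchanges the single-use wrapping token for the SecretID.
$ vault unwrap <WRAPPING_TOKEN>

# Push mode: the client or orchestrator supplies its own SecretID value.
$ vault write auth/approle/role/my-role/custom-secret-id secret_id="<CUSTOM_SECRET_ID>"
```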
### Further constraints
`role_id` is a required credential at the login endpoint. The AppRole pointed to
by the `role_id` will have constraints set on it, which dictate the other
credentials required for login. The `bind_secret_id` constraint requires
`secret_id` to be presented at the login endpoint. Going forward, this auth
method can support more constraint parameters for a varied set of apps. Some
constraints do not require a credential, but still enforce rules for login. For
example, `secret_id_bound_cidrs` will only allow logins coming from IP addresses
belonging to configured CIDR blocks on the AppRole.
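For instance, to restrict SecretID use to a private network range (the CIDR
values are illustrative):

```shell-session
$ vault write auth/approle/role/my-role \
    secret_id_bound_cidrs="10.0.0.0/16,192.168.1.0/24"
```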
## Tutorial
Refer to the following tutorials to learn more:
- [AppRole Pull Authentication](/vault/tutorials/auth-methods/approle) tutorial
to learn how to use the AppRole auth method to generate tokens for machines or
apps.
- [AppRole usage best
practices](/vault/tutorials/auth-methods/approle-best-practices) to understand
the recommendation for distributing the AppRole credentials to the target
Vault clients.
## User lockout
@include 'user-lockout.mdx'
## API
The AppRole auth method has a full HTTP API. Please see the
[AppRole API](/vault/api-docs/auth/approle) for more
details.
## Code example
The following example demonstrates AppRole authentication with response
wrapping.
<CodeTabs>
<CodeBlockConfig>
```go
package main
import (
"context"
"fmt"
"os"
vault "github.com/hashicorp/vault/api"
auth "github.com/hashicorp/vault/api/auth/approle"
)
// Fetches a key-value secret (kv-v2) after authenticating via AppRole.
func getSecretWithAppRole() (string, error) {
config := vault.DefaultConfig() // modify for more granular configuration
client, err := vault.NewClient(config)
if err != nil {
return "", fmt.Errorf("unable to initialize Vault client: %w", err)
}
// A combination of a Role ID and Secret ID is required to log in to Vault
// with an AppRole.
// First, let's get the role ID given to us by our Vault administrator.
roleID := os.Getenv("APPROLE_ROLE_ID")
if roleID == "" {
return "", fmt.Errorf("no role ID was provided in APPROLE_ROLE_ID env var")
}
// The Secret ID is a value that needs to be protected, so instead of the
// app having knowledge of the secret ID directly, we have a trusted orchestrator (https://learn.hashicorp.com/tutorials/vault/secure-introduction?in=vault/app-integration#trusted-orchestrator)
// give the app access to a short-lived response-wrapping token (https://developer.hashicorp.com/vault/docs/concepts/response-wrapping).
// Read more at: https://learn.hashicorp.com/tutorials/vault/approle-best-practices?in=vault/auth-methods#secretid-delivery-best-practices
secretID := &auth.SecretID{FromFile: "path/to/wrapping-token"}
appRoleAuth, err := auth.NewAppRoleAuth(
roleID,
secretID,
auth.WithWrappingToken(), // Only required if the secret ID is response-wrapped.
)
if err != nil {
return "", fmt.Errorf("unable to initialize AppRole auth method: %w", err)
}
authInfo, err := client.Auth().Login(context.Background(), appRoleAuth)
if err != nil {
return "", fmt.Errorf("unable to login to AppRole auth method: %w", err)
}
if authInfo == nil {
return "", fmt.Errorf("no auth info was returned after login")
}
// get secret from the default mount path for KV v2 in dev mode, "secret"
secret, err := client.KVv2("secret").Get(context.Background(), "creds")
if err != nil {
return "", fmt.Errorf("unable to read secret: %w", err)
}
// data map can contain more than one key-value pair,
// in this case we're just grabbing one of them
value, ok := secret.Data["password"].(string)
if !ok {
return "", fmt.Errorf("value type assertion failed: %T %#v", secret.Data["password"], secret.Data["password"])
}
return value, nil
}
```
</CodeBlockConfig>
<CodeBlockConfig>
```cs
using System;
using System.Collections.Generic;
using System.IO;
using VaultSharp;
using VaultSharp.V1.AuthMethods;
using VaultSharp.V1.AuthMethods.AppRole;
using VaultSharp.V1.AuthMethods.Token;
using VaultSharp.V1.Commons;
namespace Examples
{
public class ApproleAuthExample
{
const string DefaultTokenPath = "../../../path/to/wrapping-token";
/// <summary>
/// Fetches a key-value secret (kv-v2) after authenticating to Vault via AppRole authentication
/// </summary>
public string GetSecretWithAppRole()
{
// A combination of a Role ID and Secret ID is required to log in to Vault with an AppRole.
// The Secret ID is a value that needs to be protected, so instead of the app having knowledge of the secret ID directly,
// we have a trusted orchestrator (https://developer.hashicorp.com/vault/tutorials/app-integration/secure-introduction?in=vault%2Fapp-integration#trusted-orchestrator)
// give the app access to a short-lived response-wrapping token (https://developer.hashicorp.com/vault/docs/concepts/response-wrapping).
// Read more at: https://learn.hashicorp.com/tutorials/vault/approle-best-practices?in=vault/auth-methods#secretid-delivery-best-practices
var vaultAddr = Environment.GetEnvironmentVariable("VAULT_ADDR");
if(String.IsNullOrEmpty(vaultAddr))
{
throw new System.ArgumentNullException("Vault Address");
}
var roleId = Environment.GetEnvironmentVariable("APPROLE_ROLE_ID");
if(String.IsNullOrEmpty(roleId))
{
throw new System.ArgumentNullException("AppRole Role Id");
}
// Get the path to wrapping token or fall back on default path
string pathToToken = !String.IsNullOrEmpty(Environment.GetEnvironmentVariable("WRAPPING_TOKEN_PATH")) ? Environment.GetEnvironmentVariable("WRAPPING_TOKEN_PATH") : DefaultTokenPath;
string wrappingToken = File.ReadAllText(pathToToken); // placed here by a trusted orchestrator
// We need to create two VaultClient objects for authenticating via AppRole. The first is for
// using the unwrap utility. We need to initialize the client with the wrapping token.
IAuthMethodInfo wrappedTokenAuthMethod = new TokenAuthMethodInfo(wrappingToken);
var vaultClientSettingsForUnwrapping = new VaultClientSettings(vaultAddr, wrappedTokenAuthMethod);
IVaultClient vaultClientForUnwrapping = new VaultClient(vaultClientSettingsForUnwrapping);
// We pass null here instead of the wrapping token to avoid depleting its single usage
// given that we already initialized our client with the wrapping token
Secret<Dictionary<string, object>> secretIdData = vaultClientForUnwrapping.V1.System
.UnwrapWrappedResponseDataAsync<Dictionary<string, object>>(null).Result;
var secretId = secretIdData.Data["secret_id"]; // Grab the secret_id
// We create a second VaultClient and initialize it with the AppRole auth method and our new credentials.
IAuthMethodInfo authMethod = new AppRoleAuthMethodInfo(roleId, secretId.ToString());
var vaultClientSettings = new VaultClientSettings(vaultAddr, authMethod);
IVaultClient vaultClient = new VaultClient(vaultClientSettings);
// We can retrieve the secret from VaultClient
Secret<SecretData> kv2Secret = null;
kv2Secret = vaultClient.V1.Secrets.KeyValue.V2.ReadSecretAsync(path: "/creds").Result;
var password = kv2Secret.Data.Data["password"];
return password.ToString();
}
}
}
```
</CodeBlockConfig>
</CodeTabs> | vault | layout docs page title Use AppRole authentication description Use AppRole authentication with Vault to control how machines and services authenticate to Vault Use AppRole authentication The approle auth method allows machines or apps to authenticate with Vault defined roles The open design of AppRole enables a varied set of workflows and configurations to handle large numbers of apps This auth method is oriented to automated workflows machines and services and is less useful for human operators We recommend using batch tokens with the AppRole auth method An AppRole represents a set of Vault policies and login constraints that must be met to receive a token with those policies The scope can be as narrow or broad as desired An AppRole can be created for a particular machine or even a particular user on that machine or a service spread across machines The credentials required for successful login depend upon the constraints set on the AppRole associated with the credentials Authentication Via the CLI The default path is approle If this auth method was enabled at a different path specify auth my path login instead shell session vault write auth approle login role id db02de05 fa39 4855 059b 67221c5c2f63 secret id 6a174c20 f6de a53c 74d2 6018fcceff64 Key Value token 65b74ffd 842c fd43 1386 f7d7006e520a token accessor 3c29bc22 5c72 11a6 f778 2bc8f48cea0e token duration 20m0s token renewable true token policies default Via the API The default endpoint is auth approle login If this auth method was enabled at a different path use that value instead of approle shell session curl request POST data role id 988a9df secret id 37b74931 http 127 0 0 1 8200 v1 auth approle login The response will contain the token at auth client token json auth renewable true lease duration 2764800 metadata policies default dev policy test policy accessor 5d7fb475 07cb 4060 c2de 1ca3fcbf0c56 client token 98a4c7ab b1fe 361b ba0b e307aacfd587 Application Integration See the Code Example code example section for a code snippet demonstrating the authentication with Vault using the AppRole auth method Configuration Auth methods must be configured in advance before users or machines can authenticate These steps are usually completed by an operator or configuration management tool Via the CLI 1 Enable the AppRole auth method shell session vault auth enable approle 1 Create a named role shell session vault write auth approle role my role token type batch secret id ttl 10m token ttl 20m token max ttl 30m secret id num uses 40 Note If the token issued by your approle needs the ability to create child tokens you will need to set token num uses to 0 For the complete list of configuration options please see the API documentation 1 Fetch the RoleID of the AppRole shell session vault read auth approle role my role role id role id db02de05 fa39 4855 059b 67221c5c2f63 1 Get a SecretID issued against the AppRole shell session vault write f auth approle role my role secret id secret id 6a174c20 f6de a53c 74d2 6018fcceff64 secret id accessor c454f7e5 996e 7230 6074 6ef26b7bcf86 secret id ttl 10m secret id num uses 40 Via the API 1 Enable the AppRole auth method shell session curl header X Vault Token request POST data type approle http 127 0 0 1 8200 v1 sys auth approle 1 Create an AppRole with desired set of policies shell session curl header X Vault Token request POST data policies dev policy test policy token type batch http 127 0 0 1 8200 v1 auth approle role my role 1 Fetch the identifier of the role shell session 
---
layout: docs
page_title: Best practices for AppRole authentication
description: >-
Follow best practices for AppRole authentication to secure access and validate
application workload identity.
---
# Best practices for AppRole authentication
At the core of Vault's usage are authentication and authorization. Understanding how Vault surfaces these to the client is the key to understanding how to configure and manage Vault.
- Vault provides authentication to a client by the use of [auth methods](/vault/docs/concepts/auth).
- Vault provides authorization to a client by the use of [policies](/vault/docs/concepts/policies).
Vault provides several internal and external authentication methods. External methods are called _trusted third-party authenticators_, such as AWS, LDAP, GitHub, and so on. In some situations a trusted third-party authenticator is not available, so Vault has an alternate approach: **AppRole**. If another platform method of authentication is available via a trusted third-party authenticator, the best practice is to use that instead of AppRole.
This guide relies heavily on two fundamental principles for Vault: limiting both the blast-radius of an identity and the duration of authentication.
### Blast-radius of an identity
Vault is an identity-based secrets management solution, where access to a secret is based on the known and verified identity of a client. It is crucial that identities authenticating to Vault are identifiable and have access only to the secrets they are the users of. Secrets should never be proxied between Vault and the secret end-user, and a client should never have access to secrets it is not the end-user of.
### Duration of authentication
When Vault verifies an entity's identity, Vault then provides that entity with a [token](/vault/docs/concepts/tokens). The client uses this token for all subsequent interactions with Vault to prove authentication, so this token should be both handled securely and have a limited lifetime. A token should live only as long as access to the secrets it authorizes is needed.
## Glossary of terms
- **Authentication** - The process of confirming identity. Often abbreviated to _AuthN_
- **Authorization** - The process of verifying what an entity has access to and at what level. Often abbreviated to _AuthZ_
- **RoleID** - The semi-secret identifier for the role that will authenticate to Vault. Think of this as the _username_ portion of an authentication pair.
- **SecretID** - The secret identifier for the role that will authenticate to Vault. Think of this as the _password_ portion of an authentication pair.
- **AppRole role** - The role configured in Vault that contains the authorization and usage parameters for the authentication.
## What is AppRole auth method?
The AppRole authentication method is for machine authentication to Vault. Because AppRole is designed to be flexible, it has many ways to be configured. The burden of security is on the configurator rather than a trusted third party, as is the case in other Vault auth methods.
AppRole is not a trusted third-party authenticator, but a _trusted broker_ method. The difference is that in AppRole authentication, the onus of trust rests in a securely-managed broker system that brokers authentication between clients and Vault.
The central tenet of this security is that during the brokering of the authentication to Vault, the **RoleID** and **SecretID** are only ever together on the end-user system that needs to consume the secret.
In an AppRole authentication, there are three players:
- **Vault** - The Vault service
- **The broker** - This is the trusted and secured system that brokers the authentication.
- **The secret consumer** - This is the final consumer of the secret from Vault.
## Platform credential delivery method
To prevent any one system, other than the target client, from obtaining the complete set of credentials (RoleID and SecretID), the recommended implementation is to deliver those values separately through two different channels. This enables you to provide narrowly-scoped tokens to each trusted orchestrator to access either RoleID or SecretID, but never both.
### RoleID delivery best practices
RoleID is an identifier that selects the AppRole against which the other credentials are evaluated. Think of it as a username for an application; therefore, RoleID is not a secret value. It's a static UUID that identifies a specific role configuration. Generally, you create a role per application to ensure that each application will have a unique RoleID.
Because it is not a secret, you can embed the RoleID value into a machine image or container as a text file or environment variable.
For example:
- Build an image with [Packer](/packer/tutorials/) with RoleID stored as an environment variable.
- Use [Terraform](/terraform/tutorials/) to provision a machine embedded with RoleID.
There are a number of different patterns through which this value can be delivered.
The application running on the machine or container will read the RoleID from the file or environment variable to authenticate with Vault.
#### Policy requirement
An appropriate policy is required to read the RoleID from Vault. For example, to get the RoleID for a role named "jenkins", the policy should look like the following:
```hcl
# Grant 'read' permission on the 'auth/approle/role/<role_name>/role-id' path
path "auth/approle/role/jenkins/role-id" {
capabilities = [ "read" ]
}
```
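For example, a trusted orchestrator holding a token with this policy could fetch the RoleID with the CLI. A minimal sketch; the UUID in the output is illustrative:
```shell-session
$ vault read auth/approle/role/jenkins/role-id

Key        Value
---        -----
role_id    db02de05-fa39-4855-059b-67221c5c2f63
```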
### SecretID delivery best practices
SecretID is a credential that is required by default for any login and is intended to always be secret. While RoleID is similar to a username, SecretID is equivalent to a password for its corresponding RoleID.
There are two additional considerations when distributing the SecretID, since it is a secret and should be secured so that only the intended recipient is able to read it.
1. Binding CIDRs
1. AppRole response wrapping
#### Binding CIDRs
When defining an AppRole, you can use the [`secret_id_bound_cidrs`](/vault/api-docs/auth/approle#secret_id_bound_cidrs) parameter to specify blocks of IP addresses which can perform the login operation for this role. You can further limit the IP range per token using [`token_bound_cidrs`](/vault/api-docs/auth/approle#token_bound_cidrs).
**Example:**
```shell-session
$ vault write auth/approle/role/jenkins \
secret_id_bound_cidrs="0.0.0.0/0","127.0.0.1/32" \
secret_id_ttl=60m \
secret_id_num_uses=5 \
enable_local_secret_ids=false \
token_bound_cidrs="0.0.0.0/0","127.0.0.1/32" \
token_num_uses=10 \
token_ttl=1h \
token_max_ttl=3h \
token_type=default \
period="" \
policies="default","test"
```
<Tip title="CIDR consideration">
While there is no hard limit on how many CIDR blocks you can set using the
`token_bound_cidrs` parameter, there are limiting factors. One is the time it
takes Vault to compare an IP address against the list. Another is the maximum
HTTP request size when you create the list.
</Tip>
#### AppRole response wrapping
To guarantee confidentiality, integrity, and non-repudiation of the SecretID, you can use the `-wrap-ttl` flag when generating the SecretID. Instead of returning the SecretID in plaintext, this puts it into a new token's cubbyhole with a token use count of 1. Because the wrapping token can be used only once, if the intended application successfully reads the SecretID, you can guarantee that no one else read it first.
**Example:** The following CLI command retrieves the SecretID for a role named, "jenkins". The generated SecretID is wrapped in a token which is valid for 60 seconds to unwrap.
```shell-session
$ vault write -wrap-ttl=60s -force auth/approle/role/jenkins/secret-id
Key Value
--- -----
wrapping_token: s.yzbznr9NlZNzsgEtz3SI56pX
wrapping_accessor: Smi4CO0Sdhn8FJvL8XvOT30y
wrapping_token_ttl: 1m
wrapping_token_creation_time: 2021-06-07 20:02:01.019838 -0700 PDT
wrapping_token_creation_path: auth/approle/role/jenkins/secret-id
```
Finally, you can monitor your audit logs for attempted read access of your SecretID. If Vault throws a use-limit error when an application tries to read the SecretID, you know that someone else has already read it, and you can alert on that. The audit logs will indicate where the SecretID read attempt originated.
#### Policy requirement
An appropriate policy is required to retrieve a SecretID from Vault. For example, to get a SecretID for a role named "jenkins", the policy should look like the following:
```hcl
# Grant 'update' permission on the 'auth/approle/role/<role_name>/secret-id' path
path "auth/approle/role/jenkins/secret-id" {
capabilities = [ "update" ]
}
```
## Token lifetime considerations
Tokens must be maintained client-side and can be renewed upon expiration. For short-lived workflows, tokens have traditionally been created with a lifetime matching the average deploy time and then left to expire, with new tokens secured for each deployment.
A long token time-to-live (TTL) can cause out-of-memory conditions when Vault tries to purge millions of AppRole leases. To avoid this, we recommend that you reduce TTLs for AppRole tokens and implement token renewal where possible. You can increase the memory on the Vault server; however, that is not a long-term solution.
In general, with any auth method, it is preferable for applications to keep using the same Vault token to fetch secrets repeatedly instead of authenticating anew each time. Authentication is an expensive operation and results in a token that Vault must keep track of. If high authentication throughput (thousands of authentications per second) is expected, we recommend using batch tokens, which are issued from memory and do not consume storage.
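For example, a role could be configured to issue short-lived batch tokens. A sketch; the role name and TTL are illustrative:
```shell-session
$ vault write auth/approle/role/my-role \
    token_type=batch \
    token_ttl=20m
```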
### Vault Agent
Consider running [Vault Agent](/vault/docs/agent-and-proxy/agent) on the client host, and let the agent manage the token's lifecycle. Vault Agent reduces the number of tokens used by the client applications. In addition, it eliminates the need to implement the Vault APIs to authenticate with Vault and renew the token TTL if necessary.
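As a minimal sketch, an agent configuration for AppRole auto-auth might look like the following; the file paths are assumptions, and the SecretID file is deleted after the agent reads it:
```hcl
auto_auth {
  method "approle" {
    config = {
      role_id_file_path                   = "/etc/vault/role-id"
      secret_id_file_path                 = "/etc/vault/secret-id"
      remove_secret_id_file_after_reading = true
    }
  }

  # Write the managed Vault token to a file for client applications to read.
  sink "file" {
    config = {
      path = "/var/run/vault/vault-token"
    }
  }
}
```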
To learn more about Vault Agent, read the following tutorials:
- [Vault Agent with AWS](/vault/tutorials/vault-agent/agent-aws)
- [Vault Agent with Kubernetes](/vault/tutorials/kubernetes/agent-kubernetes)
- [Vault Agent Templates](/vault/tutorials/vault-agent/agent-templates)
- [Vault Agent Caching](/vault/tutorials/vault-agent/agent-caching)
## Jenkins CI/CD
When you are using Jenkins as a CI tool, Jenkins itself will need an identity; however, you should never have Jenkins log into Vault and pass a client token to the application via the workflow. Jenkins needs to give the application its own identity so that the application gets its own secret. The best practice is to use Vault Agent as much as possible with Jenkins so that the Vault token is not managed by Jenkins. You can deliver a SecretID every morning, or before every run, for x number of uses. Let Vault Agent authenticate with Vault and get the token for Jenkins. Then, Jenkins uses that token for x number of operations against Vault.
A key benefit of AppRole for applications is that it enables you to more easily migrate the application between platforms.
When you use an AppRole for the application, the best practice is to obscure the RoleID from Jenkins but allow Jenkins to deliver a wrapped SecretID to the application.
### Usage workflow
Jenkins needs to run a job requiring some data classified as secret and stored in Vault. It has a master and a worker node, where the worker node runs jobs on spawned container runners that are short-lived.
The process would look like:
1. Jenkins worker authenticates to Vault
2. Vault returns a token
3. Worker uses token to retrieve a wrapped SecretID for the **role** of the job it will spawn
4. Wrapped SecretID returned by Vault
5. Worker spawns job runner and passes wrapped SecretID as a variable to the job
6. Runner container requests unwrap of SecretID
7. Vault returns SecretID
8. Runner uses RoleID and SecretID to authenticate to Vault
9. Vault returns a token with policies that allow read of the required secrets
10. Runner uses the token to get secrets from Vault

Here are more details on the more complicated steps of that process.
<Note title="Secrets wrapping">
If you are unfamiliar with secrets wrapping, refer to the [response wrapping](/vault/docs/concepts/response-wrapping) documentation.
</Note>
#### CI worker authenticates to Vault
The CI worker will need to authenticate to Vault to retrieve wrapped SecretIDs for the AppRoles of the jobs it will spawn.
If the worker can use a platform method of authentication, then the worker should use that. Otherwise, the only option is to pre-authenticate the worker to Vault in some other way.
#### Vault returns a token
The worker's Vault token should be of limited scope and should only retrieve wrapped SecretIDs. Because of this, the worker could be pre-seeded with a long-lived Vault token or use a hard-coded RoleID and SecretID, as this presents only a minor risk.
The policy the worker should have would be:
```hcl
path "auth/approle/role/+/secret*" {
capabilities = [ "create", "read", "update" ]
min_wrapping_ttl = "100s"
max_wrapping_ttl = "300s"
}
```
#### Worker uses token to retrieve a wrapped SecretID
The CI worker now needs to be able to retrieve a wrapped SecretID.
This command would be something like:
```shell-session
$ vault write -wrap-ttl=120s -f auth/approle/role/my-role/secret-id
```
Notice that the worker only needs to know the **role** for the job it is spawning. In the example above, that is `my-role` but not the RoleID.
#### Worker spawns job runner and passes wrapped SecretID
This could be achieved by passing the wrapped token as an environment variable. Below is an example of how to do this in Jenkins:
```plaintext
environment {
WRAPPED_SID = """$s{sh(
returnStdout: true,
Script: ‘curl --header "X-Vault-Token: $VAULT_TOKEN"
--header "X-Vault-Namespace: ${PROJ_NAME}_namespace"
--header "X-Vault-Wrap-Ttl: 300s"
$VAULT_ADDR/v1/auth/approle/role/$JOB_NAME/secret-id’
| jq -r '.wrap_info.token'
)}"""
}
```
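The runner then exchanges the wrapped token for the actual SecretID. A minimal sketch, assuming the wrapped token was passed in the `WRAPPED_SID` environment variable:
```shell-session
$ SECRET_ID=$(vault unwrap -field=secret_id "$WRAPPED_SID")
```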
#### Runner uses RoleID and SecretID to authenticate to Vault
The runner would authenticate to Vault and it would only receive the policy to read the exact secrets it needed. It could not get anything else. An example policy would be:
```hcl
path "kv/my-role_secrets/*" {
capabilities = [ "read" ]
}
```
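The login itself is a standard AppRole login. A sketch, assuming the RoleID was embedded in the runner image and the SecretID was just unwrapped:
```shell-session
$ vault write auth/approle/login \
    role_id="$ROLE_ID" \
    secret_id="$SECRET_ID"
```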
#### Implementation specifics
As additional security measures, create the required role for the App bearing in mind the following:
- [`secret_id_bound_cidrs` (array: [])](/vault/api-docs/auth/approle#secret_id_bound_cidrs) - Comma-separated string or list of CIDR blocks; if set, specifies blocks of IP addresses which can perform the login operation.
- [`secret_id_num_uses` (integer: 0)](/vault/api-docs/auth/approle#secret_id_num_uses) - Number of times any particular SecretID can be used to fetch a token from this AppRole, after which the SecretID will expire. A value of zero will allow unlimited uses.
<Note title="Recommendation">
For best security, set `secret_id_num_uses` to `1` use. Also, consider changing `secret_id_bound_cidrs` to restrict the source IP range of the connecting devices.
</Note>
## Anti-patterns
Consider avoiding these anti-patterns when using Vault's AppRole auth method.
### CI worker retrieves secrets
The CI worker could just authenticate to Vault and retrieve the secrets for the job and pass these to the runner, but this would break the first of the two best practices listed above.
The CI worker will likely have to run many different types of jobs, many of which require secrets. If you use this method, the worker would need the authorization (policy) to retrieve many secrets, none of which it is the consumer of. Additionally, if a single secret were compromised, there would be no way to tie an identity to it and initiate break-glass procedures on that identity, so all secrets would have to be considered compromised.
### CI worker passes RoleID and SecretID to the runner
The worker could be authorized to Vault to retrieve the RoleID and SecretID and pass both to the runner to use. While this avoids giving the worker Vault's authorization to retrieve all secrets, it effectively has that capability because it holds both the RoleID and SecretID. This is against best practice.
### CI worker passes a Vault token to the runner
The worker could be authorized to Vault to generate child tokens that have the authorization to retrieve secrets for the pipeline.
Again, this avoids giving the worker authorization to retrieve secrets from Vault, but the worker has access to child tokens that do have that authorization, so it is against best practices.
## Security considerations
In any trusted broker situation, the broker (in this case, the Jenkins worker) must be secured and treated as a critical system. This means that users should have minimal access to it and the access should be closely monitored and audited.
Also, as the Vault audit logs provide time-stamped events, monitor the whole process with alerts on two events:
- When a wrapped SecretID is requested for an AppRole, and no Jenkins job is running
- When the Jenkins worker attempts to unwrap the token and Vault refuses because the token has already been used
In both cases, this shows that the trusted-broker workflow has likely been compromised, and the event should be investigated.
## Reference materials
- [How (and Why) to Use AppRole Correctly in HashiCorp Vault](https://www.hashicorp.com/blog/how-and-why-to-use-approle-correctly-in-hashicorp-vault)
- [Response wrapping concept](/vault/docs/concepts/response-wrapping)
- [ACL policies](/vault/docs/concepts/policies)
- [Token periods and TTLs](/vault/docs/concepts/tokens#token-time-to-live-periodic-tokens-and-explicit-max-ttls)
---
layout: docs
page_title: Set up login MFA
description: >-
Use basic multi-factor authentication (MFA) with Vault to add an extra level
of user verification to your authentication workflow for Vault.
---
# Set up login MFA
The underlying identity system in Vault supports multi-factor authentication
(MFA) for authenticating to an auth method using different authentication types.
MFA implementation | Required Vault edition
----------------------------------------- | -----------------------
Login MFA | Vault Community
[Step-up MFA](/vault/docs/enterprise/mfa) | Vault Enterprise
## Login MFA types
MFA in Vault includes the following login types:
~> **NOTE:** The [Token](/vault/docs/auth/token) auth method cannot be configured with Vault's built-in Login MFA feature.
- `Time-based One-time Password (TOTP)` - If configured and enabled on a login path,
this would require a TOTP passcode along with a Vault token to be presented
while invoking the API login request. The passcode will be validated against the
TOTP key present in the caller's identity in Vault.
- `Okta` - If Okta push is configured and enabled on a login path, then the enrolled
device of the user will receive a push notification to either approve or deny access
to the API. The Okta username will be derived from the caller identity's
alias.
- `Duo` - If Duo push is configured and enabled on a login path, then the enrolled
device of the user will receive a push notification to either approve or deny access
to the API. The Duo username will be derived from the caller identity's
alias. Note that Duo could also be configured to use passcodes for authentication.
- `PingID` - If PingID push is configured and enabled on a login path, the
enrolled device of the user will receive a push notification to either approve or deny
access to the API. The PingID username will be derived from the caller
identity's alias.
## Login MFA procedure
~> **NOTE:** Vault's built-in Login MFA feature does not protect against brute forcing of
TOTP passcodes by default. We recommend that per-client [rate limits](/vault/docs/concepts/resource-quotas)
are applied to the relevant login and/or mfa paths (e.g. `/sys/mfa/validate`). External MFA
methods (`Duo`, `Ping` and `Okta`) may already provide configurable rate limiting. Rate limiting of
Login MFA paths is enforced by default in Vault 1.10.1 and above.
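For example, a rate limit quota on the validate path could look like the following sketch; the quota name and rate are illustrative:
```shell-session
$ vault write sys/quotas/rate-limit/mfa-validate \
    path="sys/mfa/validate" \
    rate=10
```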
Login MFA can be configured to further secure authentication to an auth method. To enable login
MFA, an MFA method needs to be configured. Please see [Login MFA API](/vault/api-docs/secret/identity/mfa) for details
on how to configure an MFA method. Once an MFA method is configured, an operator can configure an MFA enforcement using the returned unique MFA method ID.
Please see [Login MFA Enforcement API](/vault/api-docs/secret/identity/mfa/login-enforcement)
for details on how to configure an MFA enforcement config. MFA could be enforced for an entity, a group of
entities, a specific auth method accessor, or an auth method type. A login request that matches
any MFA enforcement restrictions is subject to further MFA validation,
such as a one-time passcode, before being authenticated.
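As an illustrative sketch, you might create a TOTP method and enforce it for all userpass logins like this; the parameter values and the method ID are illustrative:
```shell-session
$ vault write identity/mfa/method/totp \
    issuer=Vault \
    period=30 \
    algorithm=SHA256 \
    digits=6

$ vault write identity/mfa/login-enforcement/userpass-mfa \
    mfa_method_ids="820997b3-110e-c251-7e8b-ff4aa428a6e1" \
    auth_method_types="userpass"
```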
There are two ways to validate a login request that is subject to MFA validation.
### Single-Phase login
In the Single-phase login, the required MFA information is embedded in a login request using
the `X-Vault-MFA` header. In this case, the MFA validation is done
as a part of the login request.
MFA credentials are retrieved from the `X-Vault-MFA` HTTP header. Before Vault 1.13.0, the format of
the header is `mfa_method_id[:passcode]` for TOTP, Okta, and PingID. However, for Duo, it is `mfa_method_id[:passcode=<passcode>]`.
The item in the `[]` is optional. From Vault 1.13.0, the format is consistent for all supported MFA methods, and one can use either of the above two formats.
If there are multiple MFA methods that need to be validated, a user can pass in multiple `X-Vault-MFA` HTTP headers.
#### Sample request
```shell-session
$ curl \
--header "X-Vault-Token: ..." \
--header "X-Vault-MFA: d16fd3c2-50de-0b9b-eed3-0301dadeca10:695452" \
http://127.0.0.1:8200/v1/auth/userpass/login/alice
```
If an MFA method does not require a passcode, the login request MFA header only contains the method ID.
```shell-session
$ curl \
--header "X-Vault-Token: ..." \
--header "X-Vault-MFA: d16fd3c2-50de-0b9b-eed3-0301dadeca10" \
http://127.0.0.1:8200/v1/auth/userpass/login/alice
```
Starting in Vault 1.13.0, an operator can configure a name for an MFA method.
This name should be unique in the namespace in which the MFA method is configured.
The MFA method name can be used in the MFA header.
```shell-session
$ curl \
--header "X-Vault-Token: ..." \
--header "X-Vault-MFA: sample_mfa_method_name:695452" \
http://127.0.0.1:8200/v1/auth/userpass/login/alice
```
In cases where the MFA method is configured in a specific namespace, the MFA method name should be prefixed with the namespace path.
Below shows an example where an MFA method is configured in `ns1`.
```shell-session
$ curl \
--header "X-Vault-Token: ..." \
--header "X-Vault-MFA: ns1/sample_mfa_method_name:695452" \
http://127.0.0.1:8200/v1/auth/userpass/login/alice
```
### Two-Phase login
The more conventional and prevalent MFA method is a two-request mechanism, also referred to as Two-phase Login MFA.
In Two-phase login, the `X-Vault-MFA` header is not provided in the request. In this case, after sending a regular login request,
the user receives an auth response in which MFA requirements are included. MFA requirements contain an MFA request ID
which identifies the login request that needs validation. In addition, MFA requirements contain MFA constraints
that determine which MFA types should be used to validate the request, the corresponding method IDs, and
a boolean value showing whether the MFA method uses passcodes or not. MFA constraints form a nested map in MFA Requirement
and represent all MFA enforcements that match a login request. While the example below is for the userpass login,
note that this can affect the login response on any auth mount protected by MFA validation.
#### Sample Two-Phase login response
```json
{
"request_id": "1044c151-13ea-1cf5-f6ed-000c42efd477",
"lease_id": "",
"lease_duration": 0,
"renewable": false,
"data": null,
"warnings": [
"A login request was issued that is subject to MFA validation. Please make sure to validate the login by sending another request to mfa/validate endpoint."
],
"auth": {
"client_token": "",
"accessor": "",
"policies": null,
"token_policies": null,
"identity_policies": null,
"metadata": null,
"orphan": false,
"entity_id": "",
"lease_duration": 0,
"renewable": false,
"mfa_requirement": {
"mfa_request_id": "d0c9eec7-6921-8cc0-be62-202b289ef163",
"mfa_constraints": {
"enforcementConfigUserpass": {
"any": [
{
"type": "totp",
"id": "820997b3-110e-c251-7e8b-ff4aa428a6e1",
"uses_passcode": true,
"name": "sample_mfa_method_name",
}
]
}
}
}
}
}
```
Note that the `uses_passcode` boolean value will always show true for TOTP, and false for Okta and PingID.
For the Duo method, the value can be configured as part of the method configuration, using the `use_passcode` parameter.
Please see [Duo API](/vault/api-docs/secret/identity/mfa/duo) for details
on how to configure the boolean value for Duo.
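For reference, a Duo method with passcodes enabled might be configured as in this sketch; the credential values are placeholders:
```shell-session
$ vault write identity/mfa/method/duo \
    secret_key="<duo_secret_key>" \
    integration_key="<duo_integration_key>" \
    api_hostname="api-xxxxxxxx.duosecurity.com" \
    use_passcode=true
```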
To validate the MFA restricted login request, the user sends a second request to the [validate](/vault/api-docs/system/mfa/validate)
endpoint including the MFA request ID and MFA payload. MFA payload contains a map of methodIDs and their associated credentials.
If the configured MFA methods, such as PingID, Okta, and Duo, do not require a passcode, the associated
credentials will be a list with one empty string.
#### Sample payload
```json
{
"mfa_request_id": "5879c74a-1418-1948-7be9-97b209d693a7",
"mfa_payload": {
"d16fd3c2-50de-0b9b-eed3-0301dadeca10": ["910201"]
}
}
```
If an MFA method is configured in a namespace, the MFA method name prefixed with the namespace path can be used in the validation payload.
```json
{
"mfa_request_id": "5879c74a-1418-1948-7be9-97b209d693a7",
"mfa_payload": {
"ns1/sample_mfa_method_name": ["910201"]
}
}
```
#### Sample request
```shell-session
$ curl \
--header "X-Vault-Token: ..." \
--request POST \
--data @payload.json \
http://127.0.0.1:8200/v1/sys/mfa/validate
```
#### Sample CLI request
A user is also able to use the CLI write command to validate the login request.
```shell-session
$ vault write -format=json sys/mfa/validate @payload.json
```
#### Interactive CLI for login MFA
Vault supports an interactive way of authenticating to an auth method using CLI only if the
login request is subject to a single MFA method validation. In this situation, if the MFA method
is configured to use passcodes, after sending a regular login request, the user is prompted to
insert the passcode. Upon successful MFA validation, a client token is returned.
If the configured MFA methods, such as PingID, Okta, and Duo, do not require a passcode and have out of band
mechanisms for verifying the extra factor, the user is notified to check their authenticator application.
This alleviates a user from sending the second request separately to validate a login request.
To disable the interactive login experience, a user needs to pass in the `non-interactive` flag to the login request.
```shell-session
$ vault write -non-interactive -format=json sys/mfa/validate @payload.json
```
To get started with Login MFA, refer to the [Login MFA](/vault/tutorials/auth-methods/multi-factor-authentication) tutorial.
### TOTP passcode validation rate limit
Rate limiting of Login MFA paths is enforced by default in Vault 1.10.1 and above.
By default, Vault allows 5 consecutive failed TOTP passcode validations.
This value can also be configured by adding `max_validation_attempts` to the TOTP configuration.
If the number of consecutive failed TOTP passcode validations exceeds the configured value, the user
needs to wait until a fresh TOTP passcode is available.
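For example, a TOTP method with a stricter limit could be created as in this sketch; the value is illustrative:
```shell-session
$ vault write identity/mfa/method/totp \
    issuer=Vault \
    period=30 \
    max_validation_attempts=3
```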
Vault 1 13 0 the format is consistent for all supported MFA methods and one can use either of the above two formats If there are multiple MFA methods that need to be validated a user can pass in multiple X Vault MFA HTTP headers Sample request shell session curl header X Vault Token header X Vault MFA d16fd3c2 50de 0b9b eed3 0301dadeca10 695452 http 127 0 0 1 8200 v1 auth userpass login alice If an MFA method does not require a passcode the login request MFA header only contains the method ID shell session curl header X Vault Token header X Vault MFA d16fd3c2 50de 0b9b eed3 0301dadeca10 http 127 0 0 1 8200 v1 auth userpass login alice Starting in Vault 1 13 0 an operator can configure a name for an MFA method This name should be unique in the namespace in which the MFA method is configured The MFA method name can be used in the MFA header shell session curl header X Vault Token header X Vault MFA sample mfa method name 695452 http 127 0 0 1 8200 v1 auth userpass login alice In cases where the MFA method is configured in a specific namespace the MFA method name should be prefixed with the namespace path Below shows an example where an MFA method is configured in ns1 shell session curl header X Vault Token header X Vault MFA ns1 sample mfa method name 695452 http 127 0 0 1 8200 v1 auth userpass login alice Two Phase login The more conventional and prevalent MFA method is a two request mechanism also referred to as Two phase Login MFA In Two phase login the X Vault MFA header is not provided in the request In this case after sending a regular login request the user receives an auth response in which MFA requirements are included MFA requirements contain an MFA request ID which identifies the login request that needs validation In addition MFA requirements contain MFA constraints that determine which MFA types should be used to validate the request the corresponding method IDs and a boolean value showing whether the MFA method uses passcodes or not MFA constraints form a nested map in MFA Requirement and represent all MFA enforcements that match a login request While the example below is for the userpass login note that this can affect the login response on any auth mount protected by MFA validation Sample Two Phase login response json request id 1044c151 13ea 1cf5 f6ed 000c42efd477 lease id lease duration 0 renewable false data null warnings A login request was issued that is subject to MFA validation Please make sure to validate the login by sending another request to mfa validate endpoint auth client token accessor policies null token policies null identity policies null metadata null orphan false entity id lease duration 0 renewable false mfa requirement mfa request id d0c9eec7 6921 8cc0 be62 202b289ef163 mfa constraints enforcementConfigUserpass any type totp id 820997b3 110e c251 7e8b ff4aa428a6e1 uses passcode true name sample mfa method name Note that the uses passcode boolean value will always show true for TOTP and false for Okta and PingID For Duo method the value can be configured as part of the method configuration using the use passcode parameter Please see Duo API vault api docs secret identity mfa duo for details on how to configure the boolean value for Duo To validate the MFA restricted login request the user sends a second request to the validate vault api docs system mfa validate endpoint including the MFA request ID and MFA payload MFA payload contains a map of methodIDs and their associated credentials If the configured MFA methods such as PingID Okta and Duo do not 
require a passcode the associated credentials will be a list with one empty string Sample payload json mfa request id 5879c74a 1418 1948 7be9 97b209d693a7 mfa payload d16fd3c2 50de 0b9b eed3 0301dadeca10 910201 If an MFA method is configured in a namespace the MFA method name prefixed with the namespace path can be used in the validation payload json mfa request id 5879c74a 1418 1948 7be9 97b209d693a7 mfa payload ns1 sample mfa method name 910201 Sample request shell session curl header X Vault Token request POST data payload json http 127 0 0 1 8200 v1 sys mfa validate Sample CLI request A user is also able to use the CLI write command to validate the login request shell session vault write sys mfa validate format json payload json Interactive CLI for login MFA Vault supports an interactive way of authenticating to an auth method using CLI only if the login request is subject to a single MFA method validation In this situation if the MFA method is configured to use passcodes after sending a regular login request the user is prompted to insert the passcode Upon successful MFA validation a client token is returned If the configured MFA methods such as PingID Okta and Duo do not require a passcode and have out of band mechanisms for verifying the extra factor the user is notified to check their authenticator application This alleviates a user from sending the second request separately to validate a login request To disable the interactive login experience a user needs to pass in the non interactive flag to the login request shell session vault write non interactive sys mfa validate format json payload json To get started with Login MFA refer to the Login MFA vault tutorials auth methods multi factor authentication tutorial TOTP passcode validation rate limit Rate limiting of Login MFA paths are enforced by default in Vault 1 10 1 and above By default Vault allows for 5 consecutive failed TOTP passcode validation This value can also be configured by adding max validation attempts to the TOTP configuration If the number of consecutive failed TOTP passcode validation exceeds the configured value the user needs to wait until a fresh TOTP passcode is available |
---
layout: docs
page_title: Login MFA FAQ
description: >-
  Common questions about Vault Login MFA and multi-factor authentication.
---
# Login MFA FAQ
This FAQ section contains frequently asked questions about the Login MFA feature.
- [Q: What MFA features can I access if I upgrade to Vault version 1.10?](#q-what-mfa-features-can-i-access-if-i-upgrade-to-vault-version-1-10)
- [Q: What are the various MFA workflows that are available to me as a Vault user as of Vault version 1.10, and how are they different?](#q-what-are-the-various-mfa-workflows-that-are-available-to-me-as-a-vault-user-as-of-vault-version-1-10-and-how-are-they-different)
- [Q: What is the Legacy MFA feature?](#q-what-is-the-legacy-mfa-feature)
- [Q: Will HCP Vault Dedicated support MFA?](#q-will-hcp-vault-dedicated-support-mfa)
- [Q: What is Single-Phase MFA vs. Two-Phase MFA?](#q-what-is-single-phase-mfa-vs-two-phase-mfa)
- [Q: Are there new MFA API endpoints introduced as part of the new Vault version 1.10 MFA for login functionality?](#q-are-there-new-mfa-api-endpoints-introduced-as-part-of-the-new-vault-version-1-10-mfa-for-login-functionality)
- [Q: How do MFA configurations differ between the Login MFA and Step-up Enterprise MFA?](#q-how-do-mfa-configurations-differ-between-the-login-mfa-and-step-up-enterprise-mfa)
- [Q: What are the ways to configure the various MFA workflows?](#q-what-are-the-ways-to-configure-the-various-mfa-workflows)
- [Q: Which MFA mechanism is used with the different MFA workflows in Vault version 1.10?](#q-which-mfa-mechanism-is-used-with-the-different-mfa-workflows-in-vault-version-1-10)
- [Q: Are namespaces supported with the MFA workflows that Vault has as of Vault version 1.10?](#q-are-namespaces-supported-with-the-mfa-workflows-that-vault-has-as-of-vault-version-1-10)
- [Q: I use the Vault Agent. Does MFA pose any challenges for me?](#q-i-use-the-vault-agent-does-mfa-pose-any-challenges-for-me)
- [Q: I am a Step-up Enterprise MFA user using MFA for login. Should I migrate to the new Login MFA?](#q-i-am-a-step-up-enterprise-mfa-user-using-mfa-for-login-should-i-migrate-to-the-new-login-mfa)
- [Q: I am a Step-up Enterprise MFA user using MFA for login. What are the steps to migrate to Login MFA?](#q-i-am-a-step-up-enterprise-mfa-user-using-mfa-for-login-what-are-the-steps-to-migrate-to-login-mfa)
### Q: What MFA features can I access if I upgrade to Vault version 1.10?
Vault supports Step-up Enterprise MFA as part of our Enterprise edition. The Step-up Enterprise MFA provides MFA on login, or for step-up access to sensitive resources in Vault using ACL and Sentinel policies, and is configurable through the CLI/API.
Starting with Vault version 1.10, Vault Community Edition provides [MFA on login](/vault/docs/auth/login-mfa) only. This is also available with Vault Enterprise and configurable through the CLI/API.
The Step-up Enterprise MFA will co-exist with the newly introduced Login MFA starting with Vault version 1.10.
### Q: What are the various MFA workflows that are available to me as a Vault user as of Vault version 1.10, and how are they different?
| MFA workflow | What does it do? | Who manages the MFA? | Community vs. Enterprise Support |
| ---------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- | ----------------------------- |
| [Login MFA](/vault/docs/auth/login-mfa) | MFA in Vault Community Edition provides MFA on login. CLI, API, and UI-based login are supported. | MFA is managed by Vault | Supported in Vault Community Edition |
| [Okta Auth MFA](/vault/docs/auth/okta#mfa) | This is MFA as part of [Okta Auth method](/vault/docs/auth/okta) in Vault Community Edition, where MFA is enforced by Okta on login. MFA must be satisfied for authentication to be successful. This is different from the Okta MFA method used with Login MFA and Step-up Enterprise MFA. CLI/API login are supported. | MFA is managed externally by Okta | Supported in Vault Community Edition |
| [Step-up Enterprise MFA](/vault/docs/enterprise/mfa) | MFA in Vault Enterprise provides MFA for login and for step-up access to sensitive resources in Vault. Supports CLI/API based login, and ACL/Sentinel policies. | MFA is managed by Vault | Supported in Vault Enterprise |
~> **Note**: [The Legacy MFA](/vault/docs/v1.10.x/auth/mfa) is a **deprecated** MFA workflow in Vault Community Edition. Refer [here](#q-what-is-the-legacy-mfa-feature) for more details.
### Q: What is the Legacy MFA feature?
[Legacy MFA](/vault/docs/v1.10.x/auth/mfa) is functionality that was available in Vault Community Edition, prior to introducing MFA in the Enterprise version. This is now a deprecated feature. Please see the [Vault Feature Deprecation Notice and Plans](/vault/docs/deprecation) for detailed product plans around deprecated features. We plan to remove Legacy MFA in 1.11.
### Q: Will HCP Vault Dedicated support MFA?
Yes, HCP Vault Dedicated will support MFA across all tiers and offerings as part of the April 2022 release.
### Q: What is Single-Phase MFA vs. Two-Phase MFA?
- **Single-Phase MFA:** This is a single request mechanism where the required MFA information, such as MFA method ID, is provided via the X-Vault-MFA header in a single MFA request that is used to authenticate into Vault.
~> **Note**: If a configured MFA method needs a passcode, the passcode must be provided in the request, such as in the case of TOTP or Duo.
If the configured MFA methods, such as PingID, Okta, or Duo, do not require a passcode and have out of band mechanisms for verifying the extra factor, Vault will send an inquiry to the other service's APIs to determine whether the MFA request has yet been verified.
- **Two-Phase MFA:** This is a two-request MFA method that is more conventionally used.
- The MFA passcode required for the configured MFA method is not provided in a header of the login request that is MFA-restricted. Instead, the user first authenticates to the auth method, and on successful authentication to the auth method, an MFA requirement is returned to the user. The MFA requirement contains the MFA RequestID and constraints applicable to the MFA as configured by the operator.
  - The user then must make a second request to the new endpoint `sys/mfa/validate`, providing the MFA RequestID and an MFA payload that maps each MFA method ID to its passcode (if applicable). If MFA validation passes, the new Vault token is persisted and returned to the user in the response, just like a regular Vault token created using a non-MFA-restricted auth method.
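For illustration, the following is a minimal sketch of that second request, reusing the placeholder request ID, method ID, and passcode from the Login MFA documentation's examples:

```shell-session
$ cat payload.json
{
  "mfa_request_id": "5879c74a-1418-1948-7be9-97b209d693a7",
  "mfa_payload": {
    "d16fd3c2-50de-0b9b-eed3-0301dadeca10": ["910201"]
  }
}

$ curl \
    --header "X-Vault-Token: ..." \
    --request POST \
    --data @payload.json \
    http://127.0.0.1:8200/v1/sys/mfa/validate
```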
### Q: Are there new MFA API endpoints introduced as part of the new Vault version 1.10 MFA for login functionality?
Yes, this feature adds the following new MFA configuration endpoints: `identity/mfa/method`, `identity/mfa/login-enforcement`, and `sys/mfa/validate`. Refer to the [documentation](/vault/api-docs/secret/identity/mfa/duo) for more details.
### Q: How do MFA configurations differ between the Login MFA and Step-up Enterprise MFA?
All MFA methods supported with the Step-up Enterprise MFA are supported with the Login MFA, but they use different API endpoints:
- Step-up Enterprise MFA: `sys/mfa/method/:type/:name`
- Login MFA: `identity/mfa/method/:type`
There are also two differences in how methods are defined in the two systems.
The Step-up Enterprise MFA expects the method creator to specify a name for the method; Login MFA does not, and instead returns an ID when a method is created.
The Step-up Enterprise MFA uses the combination of mount accessors plus a `username_format` template string, whereas in Login MFA, these are combined into a single field `username_format`, which uses the same identity [templating format](/vault/docs/concepts/policies#templated-policies) as used in policies.
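As a sketch of the Login MFA style, the following creates a Duo method whose `username_format` uses identity templating; the mount accessor and Duo credentials shown are placeholders, not real values:

```shell-session
$ vault write identity/mfa/method/duo \
    username_format="{{identity.entity.aliases.auth_userpass_1793464a.name}}" \
    integration_key="DIXXXXXXXXXXXXXXXXXX" \
    secret_key="<DUO_SECRET_KEY>" \
    api_hostname="api-xxxxxxxx.duosecurity.com"
```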
### Q: What are the ways to configure the various MFA workflows?
| MFA workflow | Configuration methods | Details |
| ---------------------------------------------- | ----------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Login MFA](/vault/docs/auth/login-mfa) | CLI/API. The UI does not support the configuration of Login MFA as of Vault version 1.10. | Configured using the `identity/mfa/method` endpoints, then passing those method IDs to the `identity/mfa/login-enforcement` endpoint. MFA methods supported: TOTP, Okta, Duo, PingID. |
| [Okta Auth MFA](/vault/docs/auth/okta) | CLI/API | MFA methods supported: [TOTP](https://help.okta.com/en/prod/Content/Topics/Security/mfa-totp-seed.htm) , [Okta Verify Push](https://help.okta.com/en/prod/Content/Topics/Mobile/ov-admin-config.htm). |
| [Step-up Enterprise MFA](/vault/docs/enterprise/mfa) | CLI/API | [Configured](/vault/api-docs/system/mfa) using the `sys/mfa/method` endpoints and by referencing those methods in policies. MFA Methods supported: TOTP, Okta, Duo, PingID |
### Q: Which MFA mechanism is used with the different MFA workflows in Vault version 1.10?
| MFA workflow | UI | CLI/API | Single-Phase | Two-Phase |
| ---------------------------------------------- | --------- | ----------------------------------------------------------------------------------------------------------------------------------------- | --------------------------- | --------------------------- |
| [Login MFA](/vault/docs/auth/login-mfa)        | Supported | Supported. You can select Single-Phase MFA by supplying the `X-Vault-MFA` header. In the absence of this header, Two-Phase MFA is used. | N/A                         | Supported                   |
| [Okta Auth MFA](/vault/docs/auth/okta) | N/A | N/A | MFA is not managed by Vault | MFA is not managed by Vault |
| [Step-up Enterprise MFA](/vault/docs/enterprise/mfa) | N/A | Supported | Supported | N/A |
### Q: Are namespaces supported with the MFA workflows that Vault has as of Vault version 1.10?
The Step-up Enterprise MFA configurations can only be configured in the root [namespace](/vault/docs/enterprise/mfa#namespaces), although they can be referenced in other namespaces via the policies.
Login MFA is namespace-aware. Users need a Vault Enterprise license to use or configure Login MFA with namespaces. MFA method configurations can be defined per namespace and used in enforcements defined in that namespace and its children. Login enforcements can likewise be defined per namespace and applied to that namespace and its children. In Vault Community Edition, everything operates in the root namespace.
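For example, a minimal sketch of defining a TOTP method inside a namespace (assuming a namespace named `ns1` already exists); the resulting method can then be referenced by enforcements in `ns1` and its child namespaces:

```shell-session
$ vault write -namespace=ns1 identity/mfa/method/totp \
    issuer="Vault" \
    period=30
```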
### Q: I use the Vault Agent. Does MFA pose any challenges for me?
The Vault Agent should not use MFA to authenticate to Vault; it should be able to relay requests with MFA-related headers to Vault successfully.
### Q: I am a Step-up Enterprise MFA user using MFA for login. Should I migrate to the new Login MFA?
If you are currently using Enterprise MFA, evaluate your MFA specific use cases to determine whether or not you should migrate to [Login MFA](/vault/docs/auth/login-mfa).
Here are some considerations:
- If you use the Step-up Enterprise MFA for login (with Sentinel EGP), you may find value in the simpler Login MFA workflow. We recommend that you test this out to evaluate whether it meets all your requirements.
- If you use the Step-up Enterprise MFA for more than login, please be aware that the new MFA workflow only supports the login use case. You will still need to use the Step-up Enterprise MFA for non-login use cases.
### Q: I am a Step-up Enterprise MFA user using MFA for login. What are the steps to migrate to Login MFA?
Refer to the question [Q: I am a Step-up Enterprise MFA user using MFA for login. Should I migrate to the new Login MFA?](#q-i-am-a-step-up-enterprise-mfa-user-using-mfa-for-login-should-i-migrate-to-the-new-login-mfa) to evaluate whether or not you should migrate.
If you wish to migrate to Login MFA, follow these steps and guidelines to migrate successfully.
1. First, create new MFA methods using the `identity/mfa/method` endpoints. These should mostly use the same fields as the MFA methods you defined using the `sys/mfa` endpoints, while keeping the following in mind:
   - The new endpoints yield an ID instead of allowing you to define a name.
   - The new non-TOTP endpoints have a `username_format` field instead of `username_format` + `mount_accessor` fields; see [Templated Policies](/vault/docs/concepts/policies#templated-policies) for the `username_format` format.
1. Instead of writing Sentinel EGP rules to require that logins use MFA, use the `identity/mfa/login-enforcement` endpoint to specify the MFA methods.
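As a hedged end-to-end sketch of these two steps (the Okta organization, API token, mount accessor, and method ID below are all placeholders):

```shell-session
# Create the Login MFA method; the response returns a method ID
$ vault write identity/mfa/method/okta \
    org_name="my-org" \
    api_token="<OKTA_API_TOKEN>" \
    username_format="{{identity.entity.aliases.auth_userpass_1793464a.name}}"

# Reference the returned method ID in a login enforcement
$ vault write identity/mfa/login-enforcement/okta-logins \
    mfa_method_ids="820997b3-110e-c251-7e8b-ff4aa428a6e1" \
    auth_method_types="userpass"
```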
---
layout: docs
page_title: Use Active Directory Federation Services for SAML
description: >-
Use Active Directory Federation Services (AD FS) as a SAML provider for Vault.
---
# Use Active Directory Federation Services for SAML
@include 'alerts/enterprise-and-hcp.mdx'
Configure your Vault instance to work with Active Directory Federation Services
(AD FS) and use AD FS accounts for SAML authentication.
## Before you start
- **You must have Vault Enterprise or HCP Vault v1.15.5+**.
- **You must be running AD FS on Windows Server**.
- **You must have a [SAML plugin](/vault/docs/auth/saml) enabled**.
- **You must have a Vault admin token**. If you do not have a valid admin
token, you can generate a new token in the Vault GUI or using
[`vault token create`](/vault/docs/commands/token/create) with the Vault CLI.
## Step 1: Enable the SAML authN method for Vault
<Tabs>
<Tab heading="Vault CLI" group="cli">
1. Set the `VAULT_ADDR` environment variable to your Vault instance URL. For
example:
```shell-session
$ export VAULT_ADDR="https://myvault.example.com:8200"
```
1. Set the `VAULT_TOKEN` environment variable with your admin token:
```shell-session
$ export VAULT_TOKEN="XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
```
1. Enable the SAML plugin. Use the `-namespace` flag to enable the plugin under
a specific namespace. For example:
```shell-session
$ vault -namespace=ns_admin auth enable saml
```
</Tab>
<Tab heading="Vault GUI" group="gui">
@include 'gui-instructions/enable-authn-plugin.mdx'
- Enable the SAML plugin:
  1. Select the **SAML** tile.
1. Set the mount path.
1. Click **Enable Method**.
</Tab>
</Tabs>
## Step 2: Create a new relying party trust in AD
1. Open your Windows Server UI.
1. Go to the **Server Manager** screen.
1. Click **Tools** and select **AD FS Management**.
1. Right-click **Relying Party Trusts** and select **Add Relying Party Trust...**.
1. Follow the prompts to create a new party trust with the following settings:
| Option | Setting
| ----------------------------------------------------- | -------
| Claims aware | checked
| Enter data about relying party manually | checked
| Display name | "Vault"
| Certificates | None
| Enable support for the SAML 2.0 WebSSO protocol | checked
| SAML callback URL | Callback endpoint for your SAML plugin
| Relying party trust identifier | Any meaningful, unique string. For example "VaultIdentifier"
| Access control policy | Any valid policy or `Permit everyone`
| Configure claims issuance policy for this application | checked
<Tip>
The callback endpoint for your SAML plugin is:
`https://${VAULT_ADDRESS}/v1/<NAMESPACE>/auth/<MOUNT_PATH>/<PLUGIN_NAME>/callback`
For example, if you mounted the plugin under the `ns_admin` namespace on the
path `org/security`, the callback endpoint URL would be:
`https://${VAULT_ADDRESS}/v1/ns_admin/auth/org/security/saml/callback`
</Tip>
## Step 3: Configure the claim issuance policy in AD
1. Open your Windows Server UI.
1. Go to the **Server Manager** screen.
1. Click **Tools** and select **AD FS Management**.
1. Right-click your new **Relying Party Trust** entry and select
**Edit Claim Issuance Policy...**.
1. Click **Add Rule...** and follow the prompts to create a new **Transform
Claim Rule** with the following settings:
| Option | Setting
| ------------------------------- | -------
| Send LDAP Attributes as Claims | selected
| Rule name | Any meaningful string (e.g., "Vault SAML Claims")
| Attribute store                 | `Active Directory`
1. Complete the LDAP attribute array with the following settings:
| LDAP attribute | Outgoing claim type |
|------------------------------------|-------------------------------|
| `E-Mail-Addresses` | `Name ID` |
| `E-Mail-Addresses` | `E-Mail Address` |
| `Token-Groups - Unqualified Names` | `groups` or `Group` |
## Step 4: Update the SAML signature in AD
1. Open a PowerShell terminal on your Windows server.
1. Set the SAML signature for your relying party trust identifier to `false`:
```powershell
Set-ADFSRelyingPartyTrust `
-TargetName "<RELYING_PARTY_TRUST_IDENTIFIER>" `
-SignedSamlRequestsRequired $false
```
For example:
<CodeBlockConfig hideClipboard>
```powershell
Set-ADFSRelyingPartyTrust `
-TargetName "MyVaultIdentifier" `
-SignedSamlRequestsRequired $false
```
</CodeBlockConfig>
## Step 5: Create a default AD FS role in Vault
Use the Vault CLI to create a default role for users authenticating
with AD FS where:
- `SAML_PLUGIN_PATH` is the full path (`<NAMESPACE>/auth/<MOUNT_PATH>/<PLUGIN_NAME>`) to your
  SAML plugin.
- `VAULT_ROLE` is the name of your new AD FS role. For example, `adfs-default`.
- `DOMAIN_LIST` is a comma separated list of target domains in Active Directory.
For example: `*@example.com,*@ext.example.com`.
- `GROUP_ATTRIBUTES_REF` is:
- `groups` if your LDAP token group is `groups`
- `http://schemas.xmlsoap.org/claims/Group` if your LDAP token group is
`Group`
- `AD_GROUP_LIST` is a comma separated list of Active Directory groups that
will authenticate with SAML. For example: `VaultAdmin,VaultUser`.
```shell-session
$ vault write <SAML_PLUGIN_PATH>/role/<VAULT_ROLE> \
bound_subjects="<DOMAIN_LIST>" \
bound_subjects_type="glob" \
groups_attribute=<GROUP_ATTRIBUTES_REF> \
bound_attributes=groups="<AD_GROUP_LIST>" \
token_policies="default" \
ttl="1h"
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault write auth/saml/role/adfs-default \
bound_subjects="*@example.com,*@ext.example.com" \
bound_subjects_type="glob" \
groups_attribute=groups \
bound_attributes=groups="VaultAdmin,VaultUser" \
token_policies="default" \
ttl="1h"
```
</CodeBlockConfig>
## Step 6: Configure the SAML plugin in Vault
Use the Vault CLI to finish configuring the SAML plugin where:
- `SAML_PLUGIN_PATH` is the full path to your SAML plugin:
`<NAMESPACE>/auth/<MOUNT_PATH>/<PLUGIN_NAME>`.
- `VAULT_ROLE` is the name of your new AD FS role in Vault.
- `TRUST_IDENTIFIER` is the ID of your new relying party trust in AD FS.
- `SAML_CALLBACK_URL` is the callback endpoint for your SAML plugin:
  `https://${VAULT_ADDR}/v1/<NAMESPACE>/auth/<MOUNT_PATH>/<PLUGIN_NAME>/callback`.
- `ADFS_URL` is the discovery URL for your AD FS instance.
- `METADATA_FILE_PATH` is the path on your AD FS instance to the federation
metadata file.
```shell-session
$ vault write <SAML_PLUGIN_PATH>/config \
default_role="<VAULT_ROLE>" \
entity_id="<TRUST_IDENTIFIER>" \
acs_urls="<SAML_CALLBACK_URL> \
idp_metadata_url="<AD FS_URL>/<METADATA_FILE_PATH>"
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault write ns_admin/auth/org/security/saml/config \
default_role="adfs-default" \
entity_id="MyVaultIdentifier" \
acs_urls="${VAULT_ADDR}/v1/ns_admin/auth/org/security/saml/callback" \
idp_metadata_url="https://adfs.example.com/metadata/2007-06/federationmetadata.xml"
```
</CodeBlockConfig>
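Optionally, read the configuration back to confirm the settings were stored as expected. This sketch reuses the example mount path from above:

```shell-session
$ vault read ns_admin/auth/org/security/saml/config
```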
## Next steps
- [Link your Active Directory groups to Vault](/vault/docs/auth/saml/link-vault-group-to-ad)
- [Troubleshoot your SAML + AD FS configuration](/vault/docs/auth/saml/troubleshoot-adfs)
---
layout: docs
page_title: Set up SAML authN
description: >-
  Use SAML authentication with Vault to authenticate Vault users with a SAML
  v2.0 identity provider.
---
# Set up SAML authentication
@include 'alerts/enterprise-and-hcp.mdx'
The `saml` auth method allows users to authenticate with Vault using their identity
within a [SAML V2.0](https://saml.xml.org/saml-specifications) identity provider.
The method is suited to human users because authentication requires interaction with a web browser.
## Authentication
<Tabs>
<Tab heading="Vault CLI">
The CLI login defaults to the `/saml` path. If this auth method was enabled at a
different path, specify `-path=/my-path` in the CLI.
```shell-session
$ vault login -method=saml role=admin
Complete the login via your SAML provider. Launching browser to:
https://company.okta.com/app/vault/abc123eb9xnIfzlaf697/sso/saml?SAMLRequest=fJI9b9swEIZ3%2FwqBu0SJ%2FpBDRAZce4iBtDViN0MX40Sda...
```
The CLI opens the default browser to the generated URL where users must authenticate
with the configured SAML identity provider. The URL may be manually entered into the
browser if it cannot be automatically opened.
The CLI login behavior may be customized with the following optional parameters:
- `skip_browser` (default: `false`): If set to `true`, automatic launching of the default
browser will be skipped. The SAML identity provider URL must be manually entered in a
browser to complete the authentication flow.
- `abort_on_error` (default: `false`): If set to `true`, the CLI returns an error and
exits with a non-zero value if it cannot launch the default browser.
</Tab>
<Tab heading="Vault UI">
1. Select "SAML" from the "Method" select box.
1. Enter a role name for the "Role" field or leave blank to use
the [default role](/vault/api-docs/auth/saml#default_role).
1. Press **Sign in with SAML Provider** and complete the authentication with the
configured SAML identity provider.
</Tab>
</Tabs>
## Configuration
Auth methods must be configured in advance before users or machines can
authenticate. These steps are usually completed by an operator or configuration
management tool.
1. Enable the SAML authentication method with the `auth enable` CLI command:
```shell-session
$ vault auth enable saml
```
1. Use the `/config` endpoint to save the configuration of your SAML identity provider and
set the default role. You can configure the trust relationship with the SAML Identity
Provider by either providing a URL for its Metadata document:
```shell-session
$ vault write auth/saml/config \
default_role="admin" \
idp_metadata_url="https://company.okta.com/app/abc123eb9xnIfzlaf697/sso/saml/metadata" \
entity_id="https://my.vault/v1/auth/saml" \
acs_urls="https://my.vault/v1/auth/saml/callback"
```
or by setting the configuration Metadata manually:
```shell-session
$ vault write auth/saml/config \
default_role="admin" \
idp_sso_url="https://company.okta.com/app/abc123eb9xnIfzlaf697/sso/saml" \
idp_entity_id="https://www.okta.com/abc123eb9xnIfzlaf697" \
idp_cert="@path/to/cert.pem" \
entity_id="https://my.vault/v1/auth/saml" \
acs_urls="https://my.vault/v1/auth/saml/callback"
```
1. Create a named role:
```shell-session
$ vault write auth/saml/role/admin \
bound_subjects="*@hashicorp.com" \
bound_subjects_type="glob" \
token_policies="writer" \
bound_attributes=group="admin" \
ttl="1h"
```
This role authorizes users that have a subject with an `@hashicorp.com` suffix and
are in the `admin` group to authenticate. It also gives the resulting Vault token a
time-to-live of 1 hour and the `writer` policy.
Refer to the SAML [API documentation](/vault/api-docs/auth/saml) for a
complete list of configuration options.
### Assertion consumer service URLs
The [`acs_urls`](/vault/api-docs/auth/saml#acs_urls) configuration parameter determines
where the SAML response will be sent after users authenticate with the configured SAML
identity provider in their browser.
The values provided to Vault must:
- Match or be a subset of the configured values for the SAML application within the
configured identity provider.
- Be directed to the auth method's [assertion consumer service
callback](/vault/api-docs/auth/saml#assertion-consumer-service-callback) API.
<Note>
It is highly recommended and enforced by some identity providers to TLS-protect the
assertion consumer service URLs. A warning will be returned from Vault if any of the
configured assertion consumer service URLs are not protected by TLS.
</Note>
#### Configuration for replication
To support a single auth method mount being used across Vault [replication](/vault/docs/enterprise/replication)
clusters, `acs_urls` supports configuration of multiple values. For example, to support
SAML authentication on a primary and secondary Vault cluster, the following `acs_urls`
configuration could be given:
```shell-session
$ vault write auth/saml/config \
acs_urls="https://primary.vault/v1/auth/saml/callback,https://secondary.vault/v1/auth/saml/callback"
```
The Vault UI and CLI will automatically request the proper assertion consumer service URL
for the cluster they're configured to communicate with. This means that the entirety of the
authentication flow will stay within the targeted cluster.
#### Configuration for namespaces
The SAML auth method can be used within Vault [namespaces](/vault/docs/enterprise/namespaces).
The assertion consumer service URLs configured in both Vault and the identity provider must
include the namespace path segment.
The following table provides assertion consumer service URLs given different namespace paths:
| Namespace path | Assertion consumer service URL |
|-----------------|-------------------------------------------------------|
| `admin/` | `https://my.vault/v1/admin/auth/saml/callback` |
| `org/security/` | `https://my.vault/v1/org/security/auth/saml/callback` |
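As a sketch, enabling and configuring the method inside the `org/security` namespace (assuming the default `saml` mount path and the example identity provider from above) would look like:

```shell-session
$ vault auth enable -namespace=org/security saml

$ vault write -namespace=org/security auth/saml/config \
    default_role="admin" \
    idp_metadata_url="https://company.okta.com/app/abc123eb9xnIfzlaf697/sso/saml/metadata" \
    entity_id="https://my.vault/v1/org/security/auth/saml" \
    acs_urls="https://my.vault/v1/org/security/auth/saml/callback"
```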
### Bound attributes
Once the user has been authenticated, the authorization flow will validate
that both the [`bound_subjects`](/vault/api-docs/auth/saml#bound_subjects) and
[`bound_attributes`](/vault/api-docs/auth/saml#bound_attributes) match expected
values configured for the role. This can be used to restrict access to Vault for
a subset of users in the SAML identity provider.
For example, a role with `bound_subjects=*@hashicorp.com` and
`bound_attributes=groups=support,engineering` will only authorize users whose subject has
an `@hashicorp.com` suffix and that are in either the `support` or `engineering` group.
Whether it should be an exact match or interpret `*` as a wildcard can be
controlled by the [`bound_subjects_type`](/vault/api-docs/auth/saml#bound_subjects_type) and
[`bound_attributes_type`](/vault/api-docs/auth/saml#bound_attributes_type) parameters.
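For example, this sketch (the role name and policy are illustrative) matches subjects by glob while requiring an exact match on the group attribute:

```shell-session
$ vault write auth/saml/role/support \
    bound_subjects="*@hashicorp.com" \
    bound_subjects_type="glob" \
    bound_attributes=groups="support,engineering" \
    bound_attributes_type="string" \
    token_policies="default" \
    ttl="1h"
```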
### Bound attributes for the Microsoft identity platform
Bound attributes for the Microsoft identity platform require
`http://schemas.microsoft.com/ws/2008/06/identity/claims/groups` as the
attribute name along with your group membership values. For example, a role with
`bound_attributes=http://schemas.microsoft.com/ws/2008/06/identity/claims/groups="GROUP1_OBJECT_ID,GROUP2_OBJECT_ID"`
will only authorize users that are in either the `GROUP1_OBJECT_ID` or
`GROUP2_OBJECT_ID` group.
You can read more at the Microsoft identity platform's
[SAML token claims reference](https://learn.microsoft.com/en-us/entra/identity-platform/reference-saml-tokens).
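A sketch of such a role, with placeholder group object IDs:

```shell-session
$ vault write auth/saml/role/azure-default \
    bound_attributes=http://schemas.microsoft.com/ws/2008/06/identity/claims/groups="GROUP1_OBJECT_ID,GROUP2_OBJECT_ID" \
    token_policies="default" \
    ttl="1h"
```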
## API
The SAML authentication plugin has a full HTTP API. Refer to the
[SAML API documentation](/vault/api-docs/auth/saml) for more details.
vault page title Link Active Directory SAML groups to Vault Federation Services AD FS as a SAML provider Connect Vault policies to Active Directory groups with Active Directory layout docs Link Active Directory SAML groups to Vault | ---
layout: docs
page_title: Link Active Directory SAML groups to Vault
description: >-
Connect Vault policies to Active Directory groups with Active Directory
Federation Services (AD FS) as a SAML provider.
---
# Link Active Directory SAML groups to Vault
@include 'alerts/enterprise-and-hcp.mdx'
Configure your Vault instance to link your Active Directory groups to Vault
policies with SAML.
## Before you start
- **You must have Vault Enterprise or HCP Vault v1.15.5+**.
- **You must be running AD FS on Windows Server**.
- **You must have a [SAML plugin configured for AD FS](/vault/docs/auth/saml/adfs)**.
- **You must have a Vault admin token**. If you do not have a valid admin
token, you can generate a new token in the Vault GUI or using
[`vault token create`](/vault/docs/commands/token/create) with the Vault CLI.
## Step 1: Enable a `kv` plugin instance for AD clients
<Tabs>
<Tab heading="Vault CLI" group="cli">
Enable an instance of the KV secret engine for AD FS under a custom path:
```shell-session
$ vault secrets enable -path=<ADFS_KV_PLUGIN_PATH> kv-v2
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault secrets enable -path=adfs-kv kv-v2
```
</CodeBlockConfig>
</Tab>
<Tab heading="Vault GUI" group="gui">
@include 'gui-instructions/enable-secrets-plugin.mdx'
- Enable the KV plugin:
1. Select the **KV** tile.
1. Set a mount path that reflects the plugin purpose. For example: `adfs-kv`.
1. Click **Enable engine**.
</Tab>
</Tabs>
## Step 2: Create a read-only policy for the `kv` plugin
<Tabs>
<Tab heading="Vault CLI" group="cli">
Use `vault write` to create a read-only policy for AD FS clients that use the
new KV plugin:
```shell-session
$ vault policy write <RO_ADFS_POLICY_NAME> - << EOF
# Read and list policy for the AD FS KV mount
path "<ADFS_KV_PLUGIN_PATH>/*" {
capabilities = ["read", "list"]
}
EOF
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault policy write ro-saml-adfs - << EOF
# Read and list policy for the AD FS KV mount
path "adfs-kv/*" {
capabilities = ["read", "list"]
}
EOF
```
</CodeBlockConfig>
</Tab>
<Tab heading="Vault GUI" group="gui">
@include 'gui-instructions/create-acl-policy.mdx'
- Set the policy details and click **Create policy**:
- **Name**: "ro-saml-adfs"
- **Policy**:
```hcl
# Read and list policy for the AD FS KV mount
path "<ADFS_KV_PLUGIN_PATH>/*" {
capabilities = ["read", "list"]
}
```
</Tab>
</Tabs>
## Step 3: Create and link a Vault group to AD
<Tabs>
<Tab heading="Vault CLI" group="cli">
1. Create an external group in Vault and save the group ID to a file named
`group_id.txt`:
```shell-session
$ vault write \
-format=json \
identity/group name="SamlVaultReader" \
policies="ro-adfs-test" \
type="external" | jq -r ".data.id" > group_id.txt
```
1. Retrieve the mount accessor for the AD FS authentication method and save it
to a file named `accessor_adfs.txt`:
```shell-session
$ vault auth list -format=json | \
jq -r '.["<SAML_PLUGIN_PATH>/"].accessor' > \
accessor_adfs.txt
```
1. Create a group alias:
```shell-session
$ vault write identity/group-alias \
name="<YOUR_EXISTING_AD_GROUP>" \
mount_accessor=$(cat accessor_adfs.txt) \
canonical_id="$(cat group_id.txt)"
```
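Optionally, read the group back to confirm the alias is attached. A sketch
using the identity store's name-based endpoint:

```shell-session
$ vault read identity/group/name/SamlVaultReader
```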
</Tab>
<Tab heading="Vault GUI" group="gui">
@include 'gui-instructions/create-group.mdx'
- Follow the prompts to create an external group with the following
information:
- Name: your new Vault group name
- Type: `external`
- Policies: the read-only AD FS policy you created. For example,
`ro-saml-adfs`.
- Click **Add alias** and follow the prompts to map the Vault group name to an
existing group in Active Directory:
- Name: the name of an existing AD group (**must match exactly**).
- Auth Backend: `<SAML_PLUGIN_PATH>/ (saml)`
</Tab>
</Tabs>
## Step 4: Verify the link to Active Directory
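Before you log in as an Active Directory user, seed a test secret with your
admin token so the read-only user has something to fetch. A minimal sketch,
assuming the `adfs-kv` mount path from Step 1 and an illustrative key:

```shell-session
$ vault kv put adfs-kv/test message="hello from vault"
```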
1. Use the Vault CLI to login as an Active Directory user who is a member of
the linked Active Directory group:
```shell-session
$ vault login -method saml -path <SAML_PLUGIN_PATH>
```
1. Read your test value from the KV plugin:
```shell-session
$ vault kv get adfs-kv/test
```
---
layout: docs
page_title: "Troubleshoot ADFS and SAML: automatic group mapping fails"
description: >-
Fix connection problems in Vault due to a bad mapping between groups and
policies when using Active Directory Federation Services (ADFS) as a SAML
provider.
---
# Automatic group mapping fails
Troubleshoot problems where the debugging data suggests a bad or nonexistent
mapping between your Vault role and the AD FS Claim Issuance Policy.
## Example debugging data
<CodeBlockConfig hideClipboard highlight="14,16,21">
```json
[DEBUG] auth.saml.auth_saml_1d2227e7: validating user context for role: api=callback role_name=default-saml
role="{
"token_bound_cidrs":null,
"token_explicit_max_ttl":0,
"token_max_ttl":0,
"token_no_default_policy":false,
"token_num_uses":0,
"token_period":0,
"token_policies":["default"],
"token_type":0,
"token_ttl":0,
"BoundSubjects":["*@example.com","*@ext.example.com"],
"BoundSubjectsType":"glob",
"BoundAttributes":{"http://schemas.xmlsoap.org/claims/Group":["VaultAdmin","VaultUser"]},
"BoundAttributesType":"string",
"GroupsAttribute":"groups"
}"
user context="{
"attributes":
{
"http://schemas.xmlsoap.org/claims/Group":["Domain Users","VaultAdmin"],
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress":["[email protected]"]
},
"subject":"[email protected]"
}"
```
</CodeBlockConfig>
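Vault only emits validation entries like the example above at debug verbosity.
A minimal sketch for raising the server log level, assuming you can edit and
reload the server configuration:

```hcl
# vault-config.hcl: raise verbosity so SAML validation details are logged
log_level = "debug"
```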
## Analysis
Use `vault read` to review the current role configuration:
<CodeBlockConfig hideClipboard highlight="5,9">
```shell-session
$ vault read auth/<SAML_PLUGIN_PATH>/role/<ADFS_ROLE>
Key Value
--- -----
bound_attributes map[http://schemas.xmlsoap.org/claims/Group:[VaultAdmin VaultUser]]
bound_attributes_type string
bound_subjects [*@example.com *@ext.example.com]
bound_subjects_type glob
groups_attribute groups
token_bound_cidrs []
token_explicit_max_ttl 0s
token_max_ttl 0s
token_no_default_policy false
token_num_uses 0
token_period 0s
token_policies [default]
token_ttl 0s
token_type default
```
</CodeBlockConfig>
The Vault role uses `groups` for the group attribute, so Vault expects user
context in the SAML response to include a `groups` attribute with the form:
<CodeBlockConfig hideClipboard>
```text
user context="{
"attributes":
{
"groups":[<LIST_OF_BOUND_GROUPS>]",
...
}
}"
```
</CodeBlockConfig>
But the SAML response indicates the Claim Issuance Policy uses `Group` for the
group attribute, so the user context uses `Group` to key the bound groups:
<CodeBlockConfig hideClipboard>
```text
user context="{
"attributes":
{
"http://schemas.xmlsoap.org/claims/Group":["Domain Users","VaultAdmin"],
...
},
"subject":"[email protected]"
}"
```
</CodeBlockConfig>
## Solution
<Tabs>
<Tab heading="Option 1: Use 'Group' in the Vault role">
The first option to resolve the problem is to update `groups_attribute` for the
Vault role to use the `Group` claim:
```shell-session
$ vault write auth/<SAML_PLUGIN_PATH>/role/<ADFS_ROLE> \
groups_attribute=http://schemas.xmlsoap.org/claims/Group
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault write auth/saml/role/adfs-default \
groups_attribute=http://schemas.xmlsoap.org/claims/Group
```
</CodeBlockConfig>
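After the update, you can read the role back to confirm the new attribute
mapping. A sketch reusing the illustrative role name:

```shell-session
$ vault read auth/saml/role/adfs-default
```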
</Tab>
<Tab heading="Option 2: Use 'groups' for AD FS">
The second option to resolve the problem is to update your AD FS configuration
to use `groups` and confirm the bound attributes in Vault match the expected
groups:
1. Update your AD FS Claim Issuance Policy to use `groups` for unqualified
names:
| LDAP attribute                     | Outgoing claim type |
|------------------------------------|---------------------|
| `Token-Groups - Unqualified Names` | `groups`            |
1. Verify the bound attributes for your Vault role match the groups listed in
the SAML response:
```shell-session
$ vault write auth/<SAML_PLUGIN_PATH>/role/<ADFS_ROLE> \
bound_attributes=groups="<AD_GROUP_LIST>"
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault write auth/saml/role/default-adfs \
bound_attributes=groups="VaultAdmin,VaultUser"
```
</CodeBlockConfig>
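Once both sides agree on `groups`, retry the login to confirm the mapping. A
sketch with an illustrative role name:

```shell-session
$ vault login -method=saml role=default-adfs
```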
</Tab>
</Tabs>
## Additional resources
- [SAML auth method Documentation](/vault/docs/auth/saml)
- [SAML API Documentation](/vault/api-docs/auth/saml)
- [Set up an AD FS lab environment](https://learn.microsoft.com/en-us/windows-server/identity/ad-fs/operations/set-up-an-ad-fs-lab-environment)
---
layout: docs
page_title: External Plugin Architecture
description: Learn about Vault's plugin architecture.
---
# External plugin architecture
Vault's external plugins are completely separate, standalone applications that Vault
executes and communicates with over RPC. This means the plugin process does not
share the same memory space as Vault and therefore can only access the
interfaces and arguments given to it. This also means a crash in a plugin cannot
crash the entirety of Vault.
It is possible to enable a custom plugin with a name that's identical to a
built-in plugin. In such a situation, Vault will always choose the custom plugin
when enabling it.
-> **NOTE:** See the [Vault Integrations](/vault/integrations) page to find a
curated collection of official, partner, and community Vault plugins.
## External plugin lifecycle
Vault external plugins are long-running processes that remain running once they are
spawned by Vault, the parent process. Plugin processes can be started by Vault's
active node and performance standby nodes. Additionally, there are cases where
plugin processes may be terminated by Vault. These cases include, but are not
limited to:
- Vault active node step-down
- Vault barrier seal
- Vault graceful shutdown
- Disabling a Secrets Engine or Auth method that uses external plugins
- Database configured connection deletion
- Database configured connection update
- Database configured connection reset request
- Database root credentials rotation
- WAL Rollback from a previously failed root credentials rotation operation
The lifecycle of plugin processes is managed automatically by Vault.
Termination of these processes is typical in certain scenarios, such as the
ones listed above. Vault will start plugin processes when they are enabled. A
plugin process may be started or terminated through other internal processes
within Vault as well. Since Vault manages and tracks the lifecycle of its
plugins, these processes should not be terminated by anything other than Vault.
If a plugin process is shut down out-of-band, the plugin process will be lazily
loaded when a request that requires the plugin is received by Vault.
### External plugin scaling characteristics
External plugins can leverage [Performance Standbys](/vault/docs/enterprise/performance-standby)
without any explicit action by a plugin author. The default behavior of Vault
Enterprise is to attempt to handle all requests, including requests to plugins,
on performance standbys. If the plugin request makes any attempt to modify
storage, the request will receive a readonly error, and the request routing
code will then forward the full original request transparently to the active
node. In other words, plugins can scale horizontally on Vault Enterprise
without any effort on the plugin author's part.
## Plugin communication
Vault communicates with external plugins over RPC. To secure this
communication, Vault creates a mutually authenticated TLS connection with the
plugin's RPC server. Plugins make use of the AutoMTLS feature of
[go-plugin](https://www.github.com/hashicorp/go-plugin) which will
automatically negotiate mutual TLS for transport authentication.
The [`api_addr`](/vault/docs/configuration#api_addr) must be set in order for the
plugin process to establish communication with the Vault server during mount
time. If the storage backend has HA enabled and supports automatic host address
detection (e.g. Consul), Vault will automatically attempt to determine the
`api_addr` as well.
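A minimal server configuration sketch, with an illustrative address:

```hcl
# vault-config.hcl: the address plugins use to reach the Vault API
api_addr = "https://vault.example.com:8200"
```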
~> Note: Prior to Vault version 1.9.2, reading the original connection's TLS
connection state is not supported in plugins.
## Plugin registration
An important consideration of Vault's plugin system is to ensure the plugin
invoked by Vault is authentic and maintains integrity. There are two components
that a Vault operator needs to configure before external plugins can be run: the
plugin directory and the plugin catalog entry.
### Plugin directory
The plugin directory is a configuration option of Vault and can be specified in
the [configuration file](/vault/docs/configuration).
This setting specifies a directory in which all plugin binaries must live;
_this value cannot be a symbolic link_. A plugin
cannot be added to Vault unless it exists in the plugin directory. There is no
default for this configuration option, and if it is not set, plugins cannot be
added to Vault.
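For example, a server configuration might pin the directory as follows; the
path is illustrative:

```hcl
# vault-config.hcl: all plugin binaries must live here; must not be a symlink
plugin_directory = "/etc/vault/plugins"
```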
@include 'plugin-file-permissions-check.mdx'
### Plugin catalog
The plugin catalog is Vault's list of approved plugins. The catalog is stored in
Vault's barrier and can only be updated by a Vault user with sudo permissions.
Upon adding a new plugin, the plugin name, SHA256 sum of the executable, and the
command that should be used to run the plugin must be provided. The catalog will
ensure the executable referenced in the command exists in the plugin
directory. When added to the catalog, the plugin is not automatically executed,
but becomes visible to backends and can be executed by them. For more
information on the plugin catalog please see the [Plugin Catalog API
docs](/vault/api-docs/system/plugins-catalog).
An example of plugin registration in current versions of Vault, where `secret` is the plugin type:
```shell-session
$ vault plugin register -sha256=<SHA256 Hex value of the plugin binary> \
secret \
myplugin-database-plugin
Success! Registered plugin: myplugin-database-plugin
```
Vault versions prior to v0.10.4 lacked the `vault plugin` command, and the
registration step for them was:
```shell-session
$ vault write sys/plugins/catalog/database/myplugin-database-plugin \
sha256=<SHA256 Hex value of the plugin binary> \
command="myplugin"
Success! Data written to: sys/plugins/catalog/database/myplugin-database-plugin
```
### Plugin execution
When a backend wants to run a plugin, it first looks up the plugin, by name, in
the catalog. It then checks the executable's SHA256 sum against the one
configured in the plugin catalog. Finally, Vault runs the command configured in
the catalog, sending along the JWT formatted response wrapping token and mlock
settings. Like Vault, plugins support [the use of mlock when available](/vault/docs/configuration#disable_mlock).
~> Note: If Vault is configured with `mlock` enabled, then the Vault executable
and each plugin executable in your [plugins directory](/vault/docs/plugins/plugin-architecture#plugin-directory)
must be given the ability to use the `mlock` syscall.
### Plugin upgrades
External plugins may be updated by registering and reloading them. More details
on the upgrade procedure can be found in
[Upgrading Vault Plugins](/vault/docs/upgrading/plugins).
## Plugin multiplexing
To avoid spawning multiple plugin processes for mounts of the same type,
plugins can implement plugin multiplexing. This allows a single
plugin process to be used for multiple mounts of a given type. This single
process will be multiplexed across all Vault namespaces for mounts of this
type. Multiplexing a plugin does not affect the current behavior of existing
plugins.
To enable multiplexing, the plugin must be compiled with the `ServeMultiplex`
function call from Vault's respective `plugin` or `dbplugin` SDK packages. At
this time, there is no opt-out capability for plugins that implement
multiplexing. To use a non-multiplexed plugin, run an older version of the
plugin, i.e., the plugin calls the `Serve` function.
More resources on implementing plugin multiplexing:
* [Database secrets engines](/vault/docs/secrets/databases/custom#serving-a-plugin-with-multiplexing)
* [Secrets engines and auth methods](/vault/docs/plugins/plugin-development)
## Troubleshooting
### Unrecognized remote plugin message
If the following error is encountered when enabling a plugin secret engine or
auth method:
<CodeBlockConfig hideClipboard>
```sh
Unrecognized remote plugin message:
This usually means that the plugin is either invalid or simply
needs to be recompiled to support the latest protocol.
```
</CodeBlockConfig>
Verify whether the Vault process has `mlock` enabled, and if so, run the
following command against the plugin binary:
```shell-session
$ sudo setcap cap_ipc_lock=+ep <plugin-binary>
```
---
layout: docs
page_title: Plugin Management
description: >-
External Plugins are mountable backends that are implemented using Vault's
plugin system.
---
# Plugin management
External plugins are the components in Vault that can be implemented separately
from Vault's built-in plugins. These plugins can be either authentication
methods or secrets engines.
The [`api_addr`][api_addr] must be set in order for the plugin process to
establish communication with the Vault server during mount time. If the storage
backend has HA enabled and supports automatic host address detection (e.g.
Consul), Vault will automatically attempt to determine the `api_addr` as well.
Detailed information regarding the plugin system can be found in the
[internals documentation](/vault/docs/plugins).
## Registering external plugins
Before an external plugin can be mounted, it needs to be
[registered](/vault/docs/plugins/plugin-architecture#plugin-registration) in the
plugin catalog to ensure the plugin invoked by Vault is authentic and maintains
integrity. In the example below, `secret` is the plugin type:
```shell-session
$ vault plugin register -sha256=<SHA256 Hex value of the plugin binary> \
secret \
passthrough-plugin
Success! Registered plugin: passthrough-plugin
```
## Enabling/Disabling external plugins
After the plugin is registered, it can be mounted by specifying the registered
plugin name:
```shell-session
$ vault secrets enable -path=my-secrets passthrough-plugin
Success! Enabled the passthrough-plugin secrets engine at: my-secrets/
```
Listing secrets engines will display secrets engines that are mounted as
plugins:
```shell-session
$ vault secrets list
Path Type Accessor Plugin Default TTL Max TTL Force No Cache Replication Behavior Description
my-secrets/ plugin plugin_deb84140 passthrough-plugin system system false replicated
```
Disabling an external plugin is identical to disabling a built-in plugin:
```shell-session
$ vault secrets disable my-secrets
```
## Upgrading plugins
Upgrade instructions can be found in the [Upgrading Plugins - Guides][upgrading_plugins]
page.
[api_addr]: /vault/docs/configuration#api_addr
[upgrading_plugins]: /vault/docs/upgrading/plugins
## Plugin environment variables
An advantage external plugins have over builtin plugins is that they can specify
additional environment variables, because they run in their own process.
-> Vault 1.16.0 changed the precedence given to plugin-specific environment
variables so they take priority over Vault's environment. See full details in
the [upgrade notes](/vault/docs/upgrading/upgrade-to-1.16.x).
Use the `-env` flag once per environment variable that a plugin should be
started with:
```shell-session
$ vault plugin register -sha256=<SHA256 Hex value of the plugin binary> \
-env REGION=eu \
-env TOKEN_FILE=/var/run/token \
secret \
passthrough-plugin
Success! Registered plugin: passthrough-plugin
```
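You can read the catalog entry back to confirm the registration, including any
environment variables. A sketch; the output fields vary by Vault version:

```shell-session
$ vault plugin info secret passthrough-plugin
```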
### Plugin-specific HTTP proxy settings
Many tools and libraries automatically consume `HTTP_PROXY`, `HTTPS_PROXY`, and
`NO_PROXY` environment variables to configure HTTP proxy settings, including the
Go standard library's default HTTP client. You can use these environment
variables to configure different network proxies for different plugins:
-> You must be using an external plugin to take advantage of custom environment
variables. If you are using a builtin plugin, you can still download and register
an external version of it in order to use this workflow. Check the
[releases](https://releases.hashicorp.com/) page for the latest prebuilt plugin
binaries.
```shell-session
$ vault plugin register -sha256=<SHA256 Hex value of the plugin binary> \
-env HTTP_PROXY=eu.example.com \
auth \
jwt-eu
Success! Registered plugin: jwt-eu
$ vault plugin register -sha256=<SHA256 Hex value of the plugin binary> \
-env HTTP_PROXY=us.example.com \
auth \
jwt-us
Success! Registered plugin: jwt-us
```
You can then enable each plugin on its own path, and configure clients that
should be associated with one or the other appropriately:
```shell-session
$ vault auth enable jwt-eu
Success! Enabled the jwt-eu auth method at: auth/jwt-eu/
$ vault auth enable jwt-us
Success! Enabled the jwt-us auth method at: auth/jwt-us/
```
---
layout: docs
page_title: Plugin Development
description: Learn about Vault plugin development.
---
# Plugin development
~> Advanced topic! Plugin development is a highly advanced topic in Vault, and
is not required knowledge for day-to-day usage. If you don't plan on writing any
plugins, we recommend not reading this section of the documentation.
Because Vault communicates with plugins over an RPC interface, you can build and
distribute a plugin for Vault without having to rebuild Vault itself. This makes
it easy for you to build a Vault plugin for your organization's internal use,
for a proprietary API that you don't want to open source, or to prototype
something before contributing it back to the main project.
In theory, because the plugin interface is HTTP, you could even develop a plugin
using a completely different programming language! (Disclaimer: you would also
have to re-implement the plugin API, which is not a trivial amount of work.)
Developing a plugin is simple. The only knowledge necessary to write
a plugin is basic command-line skills and basic knowledge of the
[Go programming language](http://golang.org).
Your plugin implementation needs to satisfy the interface for the plugin
type you want to build. You can find these definitions in the docs for the
backend running the plugin.
~> Note: Plugins should be prepared to handle multiple concurrent requests
from Vault.
## Serving a plugin
### Serving a plugin with multiplexing
~> Plugin multiplexing requires `github.com/hashicorp/vault/sdk v0.5.4` or above.
The following code exhibits an example main package for a Vault plugin using
the Vault SDK for a secrets engine or auth method:
```go
package main
import (
	"os"

	myPlugin "your/plugin/import/path"

	hclog "github.com/hashicorp/go-hclog"
	"github.com/hashicorp/vault/api"
	"github.com/hashicorp/vault/sdk/plugin"
)

func main() {
	apiClientMeta := &api.PluginAPIClientMeta{}
	flags := apiClientMeta.FlagSet()
	flags.Parse(os.Args[1:])

	tlsConfig := apiClientMeta.GetTLSConfig()
	tlsProviderFunc := api.VaultPluginTLSProvider(tlsConfig)

	err := plugin.ServeMultiplex(&plugin.ServeOpts{
		BackendFactoryFunc: myPlugin.Factory,
		TLSProviderFunc:    tlsProviderFunc,
	})
	if err != nil {
		logger := hclog.New(&hclog.LoggerOptions{})
		logger.Error("plugin shutting down", "error", err)
		os.Exit(1)
	}
}
```
And that's basically it! You would just need to change `myPlugin` to your actual
plugin.
### Plugin backwards compatibility with Vault
Let's take a closer look at a snippet from the above main package.
```go
err := plugin.ServeMultiplex(&plugin.ServeOpts{
	BackendFactoryFunc: myPlugin.Factory,
	TLSProviderFunc:    tlsProviderFunc,
})
```
The call to `plugin.ServeMultiplex` ensures that the plugin will use
Vault's [plugin
multiplexing](/vault/docs/plugins/plugin-architecture#plugin-multiplexing) feature.
However, this plugin will not be multiplexed if it is run by a version of Vault
that does not support multiplexing. Vault will simply fall back to a plugin
version that it can run. Additionally, we set the `TLSProviderFunc` to ensure
that our plugin is backwards compatible with versions of Vault that do not
support automatic mutual TLS for secure [plugin
communication](/vault/docs/plugins/plugin-architecture#plugin-communication). If you
are certain your plugin does not need backwards compatibility, this field can
be omitted.
## Leveraging plugin versioning
@include 'plugin-versioning.mdx'
Auth and secrets plugins based on `framework.Backend` from the SDK should set the
[`RunningVersion`](https://github.com/hashicorp/vault/blob/sdk/v0.6.0/sdk/framework/backend.go#L95-L96)
variable, and the framework will implement the version interface.
Database plugins have a smaller API than `framework.Backend` exposes, and should
instead implement the
[`PluginVersioner`](https://github.com/hashicorp/vault/blob/sdk/v0.6.0/sdk/logical/logical.go#L150-L154)
interface directly.
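For a `framework.Backend`-based plugin, reporting a version can be as small as
setting a single field. A minimal sketch, with an illustrative version string:

```go
package myplugin

import (
	"github.com/hashicorp/vault/sdk/framework"
)

// newBackend returns a backend that reports its running version to Vault.
func newBackend() *framework.Backend {
	return &framework.Backend{
		// Surfaced by Vault's plugin catalog and version APIs.
		RunningVersion: "v0.1.0",
	}
}
```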
## Plugin logging
Auth and secrets plugins based on `framework.Backend` from the SDK can take
advantage of the SDK's [default logger](https://github.com/hashicorp/vault/blob/fe55cbbf05586ec4c0cd9bdf865ec6f741a8933c/sdk/framework/backend.go#L437).
No additional setup is required. The logger can be used like the following:
```go
func (b *backend) example() {
	b.Logger().Trace("Trace level log")
	b.Logger().Debug("Debug level log")
	b.Logger().Info("Info level log")
	b.Logger().Warn("Warn level log")
	b.Logger().Error("Error level log")
}
```
See the source code of [vault-auth-plugin-example](https://github.com/hashicorp/vault-auth-plugin-example)
for a more complete example of a plugin using logging.
## Building a plugin from source
To build a plugin from source, first navigate to the location holding the
desired plugin version. Next, run `go build` to obtain a new binary for the
plugin. Finally,
[register](/vault/docs/plugins/plugin-architecture#plugin-registration) the
plugin and enable it.
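A typical sequence might look like the following sketch; the repository name
and output path are illustrative:

```shell-session
$ cd vault-plugin-secrets-example
$ go build -o /etc/vault/plugins/example cmd/example/main.go
```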
## Plugin development - resources
For more information on how to register and enable your plugin, refer to the
[Building Plugin Backends](/vault/tutorials/app-integration/plugin-backends)
tutorial.
Other HashiCorp plugin development resources:
* [vault-auth-plugin-example](https://github.com/hashicorp/vault-auth-plugin-example)
* [Custom Secrets Engines](/vault/tutorials/custom-secrets-engine)
### Plugin development - resources - community
See the [Vault Integrations](/vault/integrations) page to find Community
plugin examples/guides developed by community members. HashiCorp does not
validate these for correctness.
---
layout: docs
page_title: Add a containerized secrets plugin
description: >-
Add a containerized secrets plugin to your Vault instance.
---
# Add a containerized secrets plugin to Vault
Run your external secrets plugins in containers to increase the isolation
between the plugin and Vault.
## Before you start
- **Your Vault instance must be running on Linux**.
- **Your Vault instance must have local access to the Docker Engine API**.
Vault uses the [Docker SDK](https://pkg.go.dev/github.com/docker/docker) to
manage containerized plugins.
- **You must have [gVisor](https://gvisor.dev/docs/user_guide/install/)
installed**. Vault uses `runsc` as the entrypoint to your container runtime.
- **If you are using a container runtime other than gVisor, you must have a
`runsc`-compatible container runtime installed**.
## Step 1: Install your container engine
Install one of the supported container engines:
- [Docker](https://docs.docker.com/engine/install/)
- [Rootless Docker](https://docs.docker.com/engine/security/rootless/)
## Step 2: Configure your container runtime
Update your container engine to use `runsc` for Unix sockets between the host
and plugin binary.
<Tabs>
<Tab heading="Docker">
1. Add `runsc` to your
[Docker daemon configuration](https://docs.docker.com/config/daemon):
```shell-session
$ sudo tee PATH_TO_DOCKER_DAEMON_CONFIG_FILE <<EOF
{
  "runtimes": {
    "runsc": {
      "path": "PATH_TO_RUNSC_INSTALLATION",
      "runtimeArgs": [
        "--host-uds=all"
      ]
    }
  }
}
EOF
```
1. Restart Docker:
```shell-session
$ sudo systemctl reload docker
```
</Tab>
<Tab heading="Rootless Docker">
1. Create a configuration directory if it does not exist already:
```shell-session
$ mkdir -p ~/.config/docker
```
1. Add `runsc` to your Docker configuration:
```shell-session
$ tee ~/.config/docker/daemon.json <<EOF
{
  "runtimes": {
    "runsc": {
      "path": "PATH_TO_RUNSC_INSTALLATION",
      "runtimeArgs": [
        "--host-uds=all",
        "--ignore-cgroups"
      ]
    }
  }
}
EOF
```
1. Restart Docker:
```shell-session
$ systemctl --user restart docker
```
</Tab>
</Tabs>
## Step 3: Update the HashiCorp `go-plugin` library
You must build your plugin locally with v1.5.0+ of the HashiCorp
[`go-plugin`](https://github.com/hashicorp/go-plugin) library to ensure the
finished binary is compatible with containerization.
Use `go install` to pull the latest version of the plugin library from the
`hashicorp/go-plugin` repo on GitHub:
```shell-session
$ go install github.com/hashicorp/go-plugin@latest
```
<Tip title="The Vault SDK includes go-plugin">
If you build with the Vault SDK, you can update `go-plugin` with `go install`
by pulling the latest SDK version from the `hashicorp/vault` repo:
`go install github.com/hashicorp/vault/sdk@latest`
</Tip>
## Step 4: Build the plugin container
Containerized plugins must run as a binary in the finished container and
behave the same whether run in a container or as a standalone application:
1. Build your plugin binary so it runs on Linux.
1. Create a container file for your plugin with the compiled binary as the
entry-point.
1. Build the image with a unique tag.
For example, to build a containerized version of the built-in key-value (KV)
secrets plugin for Docker:
1. Clone the latest version of the KV secrets plugin from
`hashicorp/vault-plugin-secrets-kv`.
```shell-session
$ git clone https://github.com/hashicorp/vault-plugin-secrets-kv.git
```
1. Build the Go binary for Linux.
```shell-session
$ cd vault-plugin-secrets-kv ; CGO_ENABLED=0 GOOS=linux \
go build -o kv cmd/vault-plugin-secrets-kv/main.go
```
1. Create an empty Dockerfile.
```shell-session
$ touch Dockerfile
```
1. Update the empty `Dockerfile` with your infrastructure build details and the
compiled binary as the entry-point.
```Dockerfile
FROM gcr.io/distroless/static-debian12
COPY kv /bin/kv
ENTRYPOINT [ "/bin/kv" ]
```
1. Build the container image and assign an identifiable tag.
```shell-session
$ docker build -t hashicorp/vault-plugin-secrets-kv:mycontainer .
```
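You can confirm the image and tag exist before registering them with Vault; a
quick sketch:

```shell-session
$ docker images hashicorp/vault-plugin-secrets-kv:mycontainer
```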
## Step 5: Register the plugin
Registering a containerized plugin with Vault is similar to registering any
other external plugin that is available locally to Vault.
1. Store the SHA256 of the plugin image:
```shell-session
$ export SHA256=$(docker images \
--no-trunc \
--format="" \
YOUR_PLUGIN_IMAGE_TAG | cut -d: -f2)
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ export SHA256=$(docker images \
--no-trunc \
--format="" \
hashicorp/vault-plugin-secrets-kv:mycontainer | cut -d: -f2)
```
</CodeBlockConfig>
1. Register the plugin with `vault plugin register` and specify your plugin
image with the `oci_image` flag:
```shell-session
$ vault plugin register \
-sha256="${SHA256}" \
-oci_image=YOUR_PLUGIN_IMAGE_TAG \
NEW_PLUGIN_TYPE NEW_PLUGIN_ID
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault plugin register \
-sha256="${SHA256}" \
-oci_image=hashicorp/vault-plugin-secrets-kv:mycontainer \
secret my-kv-container
```
</CodeBlockConfig>
1. Enable the new plugin for your Vault instance with `vault secrets enable` and
the new plugin ID:
```shell-session
$ vault secrets enable NEW_PLUGIN_ID
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault secrets enable my-kv-container
```
</CodeBlockConfig>
<Tip title="Customize container behavior with registration flags">
You can provide additional information about the image entrypoint, command,
and environment with the `-command`, `-args`, and `-env` flags for
`vault plugin register`.
</Tip>
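For instance, to pass a proxy setting into the plugin's container environment
at registration time (the variable name and value are illustrative):
```shell-session
$ vault plugin register \
    -sha256="${SHA256}" \
    -oci_image=hashicorp/vault-plugin-secrets-kv:mycontainer \
    -env="HTTPS_PROXY=http://proxy.internal:3128" \
    secret my-kv-container
```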
## Step 6: Test your plugin
Now that the container is registered with Vault, you should be able to interact
with it like any other plugin. Try writing then fetching a new secret with your
new plugin.
1. Use `vault write` to store a secret with your containerized plugin:
```shell-session
$ vault write NEW_PLUGIN_ID/SECRET_PATH SECRET_KEY=SECRET_VALUE
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault write my-kv-container/testing subject=containers
Success! Data written to: my-kv-container/testing
```
</CodeBlockConfig>
1. Fetch the secret you just wrote:
```shell-session
$ vault read NEW_PLUGIN_ID/SECRET_PATH
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault read my-kv-container/testing
===== Data =====
Key Value
--- -----
subject containers
```
</CodeBlockConfig>
## Use alternative runtimes ((#alt-runtimes))
You can force Vault to use an alternative runtime, provided the runtime is
installed locally.
To use an alternative runtime:
1. Register and name the runtime with `vault plugin runtime register`. For
example, to register the default Docker runtime (`runc`) as `docker-rt`:
```shell-session
$ vault plugin runtime register \
-oci_runtime=runc \
-type=container docker-rt
```
1. Use the `-runtime` flag during plugin registration to tell Vault which
runtime to use:
```shell-session
$ vault plugin register \
-runtime=RUNTIME_NAME \
-sha256="${SHA256}" \
-oci_image=YOUR_PLUGIN_IMAGE_TAG \
NEW_PLUGIN_TYPE NEW_PLUGIN_ID
```
For example:
<CodeBlockConfig hideClipboard>
```shell-session
$ vault plugin register \
-runtime=docker-rt \
-sha256="${SHA256}" \
-oci_image=hashicorp/vault-plugin-secrets-kv:mycontainer \
secret my-kv-container
```
</CodeBlockConfig>
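To confirm Vault registered the runtime, you can list the container runtimes
it knows about (assuming a Vault version that includes the plugin runtime
commands):
```shell-session
$ vault plugin runtime list -type=container
```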
## Troubleshooting
### Invalid backend version error
If you run into the following error while registering your plugin:
<CodeBlockConfig hideClipboard>
```plaintext
invalid backend version error: 2 errors occurred:
* error creating container: Error response from daemon: error while looking up the specified runtime path: exec: " /usr/bin/runsc": stat /usr/bin/runsc: no such file or directory
* error creating container: Error response from daemon: error while looking up the specified runtime path: exec: " /usr/bin/runsc": stat /usr/bin/runsc: no such file or directory
```
</CodeBlockConfig>
it means that Vault cannot find the executable for `runsc`. Confirm the
following is true before trying again (the quick checks below may help):
1. You have gVisor installed locally to Vault.
1. The path to `runsc` is correct in your Docker configuration.
1. Vault has permission to run the `runsc` executable.
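A few quick shell checks, assuming the default `/usr/bin/runsc` install path
and a `vault` service user (adjust both for your environment):
```shell-session
$ command -v runsc                          # runsc is on the PATH
$ grep -n runsc /etc/docker/daemon.json     # configured path matches the install
$ sudo -u vault /usr/bin/runsc --version    # the Vault user can execute runsc
```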
If you still get errors when registering a plugin, the recommended workaround is
to use the default Docker runtime (`runc`) as an
[alternative runtime](#alt-runtimes).
---
layout: docs
page_title: Containerized plugins overview
description: Learn about running external Vault plugins in containers.
---
# Containerized plugins overview
<Note title="Limited OS support">
Support for the `container` runtime is currently limited to Linux.
</Note>
Vault has a wide selection of builtin plugins to support integrating with other
systems. For example, you can use plugins to exchange app identity information
with an authentication service to receive a Vault token, or manage database
credentials. You can also register **external** plugins with your Vault instance
to extend the capabilities of your Vault server.
By default, external plugins run as subprocesses that share the user and
environment variables of your Vault instance. Administrators managing Vault
instances on Linux can choose to run external plugins in containers. Running
plugins in containers increases the isolation between individual plugins and
between the plugins and Vault.
## System requirements
- **Your Vault instance must be running on Linux**.
- **Your environment must provide Vault local access to the Docker Engine API**.
Vault uses the [Docker SDK](https://pkg.go.dev/github.com/docker/docker) to
manage containerized plugins.
- **You must have a valid container runtime installed**. We recommend
[installing gVisor](https://gvisor.dev/docs/user_guide/install/) for your
container runtime as Vault specifies the `runsc` runtime by default.
- **You must have all your plugin container images pulled and available locally**.
Vault does not currently support pulling images as part of the plugin
registration process.
## Plugin requirements
All plugins have the following basic requirements to be containerized:
- **Your plugin must be built with at least v1.6.0 of the HashiCorp
[`go-plugin`](https://github.com/hashicorp/go-plugin) library**.
- **The image entrypoint should run the plugin binary**.
Some configurations have additional requirements for the container image, listed
in [supported configurations](#supported-configurations).
## Supported configurations
Vault's containerized plugins are compatible with a variety of configurations.
In particular, they have been tested with the following:
- Default and [rootless](https://docs.docker.com/engine/security/rootless/) Docker.
- OCI-compatible runtimes `runsc` and `runc`.
- Plugin container images running as root and non-root users.
- [Mlock](/vault/docs/configuration#disable_mlock) disabled or enabled.
Not all combinations work and some have additional requirements, listed below.
If you use a configuration that matches multiple headings, you should combine
the requirements from each matching heading.
### `runsc` runtime
- You must pass an additional `--host-uds=create` flag to the `runsc` runtime.
### Rootless Docker with `runsc` runtime
- You must pass an additional `--ignore-cgroups` flag to the `runsc` runtime.
- Cgroup limits are not currently supported for this configuration.
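Putting these requirements together, a rootless Docker `daemon.json` for the
`runsc` runtime might look like the following sketch (the `runsc` path is an
assumption):
```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc",
      "runtimeArgs": [
        "--host-uds=create",
        "--ignore-cgroups"
      ]
    }
  }
}
```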
### Rootless Docker with non-root container user
- You must use a container plugin runtime with
[`rootless`](/vault/docs/commands/plugin/runtime/register#rootless) enabled.
- Your filesystem must have POSIX.1e ACL support, available by default in most
  modern Linux file systems.
- Only supported for gVisor's `runsc` runtime.
### Rootless Docker with mlock enabled
- Only supported for gVisor's `runsc` runtime.
### Non-root container user with mlock enabled
- You must set the `IPC_LOCK` capability on the plugin binary.
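For example, with `setcap` on the plugin binary before packaging it into the
container image (the binary path is illustrative):
```shell-session
$ sudo setcap cap_ipc_lock=+ep ./my-plugin
```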
## Container lifecycle and metadata
Like any other external plugin, Vault will automatically manage the lifecycle
of plugin containers. If they are killed out of band, Vault will restart them
before servicing any requests that need to be handled by them. Vault will also
[multiplex](/vault/docs/plugins/plugin-architecture#plugin-multiplexing) multiple
mounts to be serviced by the same container if the plugin supports multiplexing.
Vault labels each plugin container with a standard set of metadata to help
identify the owner of the container, including the cluster ID, Vault's own
process ID, and the plugin's name, type, and version.
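You can view this metadata on a running plugin container with standard Docker
tooling; no specific label keys are assumed here:
```shell-session
$ docker inspect --format '{{json .Config.Labels}}' CONTAINER_ID
```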
## Plugin runtimes
Users who require more control over plugin containers can use the "plugin
runtime" APIs for finer grained settings. See the CLI documentation for
[`vault plugin runtime`](/vault/docs/commands/plugin/runtime) for more details.
---
layout: docs
page_title: Server Side Consistent Token FAQ
description: A list of frequently asked questions about server side consistent tokens
---
# Server side consistent token FAQ
This FAQ section contains frequently asked questions about the Server Side Consistent Token feature.
- [Q: What is the Server Side Consistent Token feature?](#q-what-is-the-server-side-consistent-token-feature)
- [Q: I have Vault Community Edition. How does this feature impact me?](#q-i-have-vault-community-edition-how-does-this-feature-impact-me)
- [Q: What token changes does the Server Side Consistent Tokens feature introduce?](#q-what-token-changes-does-the-server-side-consistent-tokens-feature-introduce)
- [Q: Why are we changing the token?](#q-why-are-we-changing-the-token)
- [Q: What type of tokens are impacted by this feature?](#q-what-type-of-tokens-are-impacted-by-this-feature)
- [Q: Is there a new configuration that this feature introduces?](#q-is-there-a-new-configuration-that-this-feature-introduces)
- [Q: Is there anything else I need to consider to achieve consistency, besides upgrading to Vault 1.10?](#q-is-there-anything-else-i-need-to-consider-to-achieve-consistency-besides-upgrading-to-vault-1-10)
- [Q: What do I need to be paying attention to if I rely on tokens for some of my workflows?](#q-what-do-i-need-to-be-paying-attention-to-if-i-rely-on-tokens-for-some-of-my-workflows)
- [Q: What are the main mitigation options that Vault offers to achieve consistency, and what are the differences between them?](#q-what-are-the-main-mitigation-options-that-vault-offers-to-achieve-consistency-and-what-are-the-differences-between-them)
- [Q: Is this feature something I need with Consul Storage?](#q-is-this-feature-something-i-need-with-consul-storage)
### Q: What is the server side consistent token feature?
~> **Note**: This feature requires Vault Enterprise.
Vault has an [eventual consistency](/vault/docs/enterprise/consistency) model where only the leader can write to Vault's storage. When using performance standbys with Integrated Storage, there are sequences of operations that don't always yield read-after-write consistency, which may pose a challenge for some use cases.
Several client-based mitigations were added in Vault version 1.7, which depended on some modifications to clients (provide the appropriate response header per request) so they can specify state. This may not be possible to do in some environments.
To help with such cases, we’ve now added support for the Server Side Consistent Tokens feature in Vault version 1.10. See [Replication](/vault/docs/configuration/replication), [Vault Eventual Consistency](/vault/docs/enterprise/consistency), and [Upgrade to 1.10](/vault/docs/upgrading/upgrade-to-1.10.x).
This feature provides a way for Service tokens, returned from logins (or token create requests), to have the relevant minimum WAL state information embedded within the token itself. Clients can then use this token to authenticate subsequent requests. Thus, clients can obtain read-after-write consistency for the token without typically having to make changes to their code or architecture.
If a performance standby does not have the state required to authenticate the token, it returns a 412 error to allow the client to retry. If client retry is not possible, there is a server config to allow for the consistency.
### Q: I have Vault Community Edition. How does this feature impact me?
For the sake of standardization between Community and Enterprise, and due to the value of adding token prefixes in Vault for token scanning use cases, the token formats are changed across all Vault versions starting Vault 1.10. However, since there are no performance standbys or replication in Vault Community Edition, the new Vault token will always show the local index of the WAL as 0 to indicate there is nothing to wait for.
### Q: What token changes does the Server Side Consistent Tokens feature introduce?
Server Side Consistent Tokens introduces the following key changes:
- Token length: Server side consistent tokens are longer, being 95+ characters as opposed to 27+ characters. Since the token can be subject to change (see [Token](/vault/docs/concepts/tokens)), we recommend that you plan for a maximum length of 255 bytes to future-proof yourself if you have workflows that rely on the token size.
  - By default, Vault 1.10 will use the new token prefixes and new token format.
  - Tokens don't visibly have a ".namespaceID" suffix.
- Token prefix: Token prefixes are being changed as follows:
| Token Type | Old Prefix | New Prefix |
| --------------- | ---------- | ---------- |
| Service Tokens | s. | hvs. |
| Batch Tokens | b. | hvb. |
| Recovery Tokens | r. | hvr. |
### Q: Why are we changing the token?
To help with use cases that need read-after-write consistency, the Server Side Consistent Tokens feature provides a way for Service tokens, returned from logins (or token create requests), to embed the relevant information for Vault servers using Integrated Storage to know the minimum WAL index that includes the storage write for the token. This entails changes to the service token format.
The token prefix is being updated to make it easier for static-analysis code scanning tools to scan for Vault tokens, for example, to identify Vault tokens that are accidentally stored in a version control system.
### Q: What type of tokens are impacted by this feature?
With the exception of the prefix changes detailed above that apply to all token types, only Service tokens are impacted by the changes that are introduced by this feature. Other token types such as batch tokens, recovery tokens, or root service tokens are not impacted.
### Q: Is there a new configuration that this feature introduces?
There is a new configuration in the replication section as follows:
```
replication {
allow_forwarding_via_token = "new_token"
}
```
This configuration allows Vault clusters to be configured so that requests made to performance standbys that don’t yet have the most up-to-date WAL index are forwarded to the active node. Please note that there will be extra load on the active node with this type of configuration.
### Q: Is there anything else I need to consider to achieve consistency, besides upgrading to Vault 1.10?
Yes, there are several considerations to keep in mind, and possibly things that may require you to take action, depending on your use case.
- As stated earlier, if a performance standby does not have the state required to authenticate the token, it returns a 412 error allowing the client to retry.
- Ensure that your clients can retry for the best experience.
- Starting with Go api version [1.1.0](https://pkg.go.dev/github.com/hashicorp/vault/api@v1.1.0), the Go client library enables automatic retries for 412 errors. By default, retries=2, or use the client method [SetMaxRetries](https://pkg.go.dev/github.com/hashicorp/vault/api#Client.SetMaxRetries). Or, you can use the Vault environment variable [VAULT_MAX_RETRIES](/vault/docs/commands#vault_max_retries) to achieve the same result. A short Go sketch follows this list.
- If you use a client library other than Go, you may still need to ensure that your client can handle 412 retries in order to achieve consistency.
- If your client cannot retry, you can use the Vault server replication configuration `allow_forwarding_via_token` to allow for consistency. As stated earlier, this will incur extra load on the server due to forwarding of requests that don't have the up-to-date WAL-state to the server:
```
replication {
allow_forwarding_via_token = "new_token"
}
```
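For Go clients specifically, here is a minimal sketch of enabling retries with
the official `api` package (the retry count and secret path are illustrative):
```go
package main

import (
	"fmt"
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// DefaultConfig reads VAULT_ADDR, VAULT_MAX_RETRIES, and related
	// environment variables.
	config := vault.DefaultConfig()

	client, err := vault.NewClient(config)
	if err != nil {
		log.Fatalf("unable to create Vault client: %v", err)
	}

	// Retry up to 4 times on retryable responses, including the 412s
	// returned by performance standbys that have not yet caught up to
	// the WAL index embedded in the token.
	client.SetMaxRetries(4)

	// Illustrative path; replace with a path your token can read.
	secret, err := client.Logical().Read("secret/data/my-app")
	if err != nil {
		log.Fatalf("read failed after retries: %v", err)
	}
	if secret != nil {
		fmt.Println(secret.Data)
	}
}
```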
~> **Note:** If you are generating root tokens or recovery tokens without using the Vault CLI, you will need to modify the OTP length used. Refer [here](/vault/docs/upgrading/upgrade-to-1.10.x) for details.
### Q: What do I need to be paying attention to if I rely on tokens for some of my workflows?
Our documentation on [tokens](/vault/docs/concepts/tokens) clearly identifies that the token body itself is subject to change between versions and should not be relied on. We strongly recommend that you consider this while architecting your environment.
However, if you use scripting and tooling to help in the authentication process for Vault-dependent applications, it is important that you take time to understand the changes (see [Replication](/vault/docs/configuration/replication), [Vault Eventual Consistency](/vault/docs/enterprise/consistency), and [Upgrade to 1.10](/vault/docs/upgrading/upgrade-to-1.10.x)), and test these changes in your specific dev environments before deploying this in production.
If your workflow used the embedded NamespaceID suffix, you will need to perform a [token lookup](/vault/docs/commands/token/lookup) because this is currently absent in the new tokens.
### Q: What are the main mitigation options that Vault offers to achieve consistency, and what are the differences between them?
Vault offers the following options to achieve consistency:
- [Client based mitigations](/vault/docs/enterprise/consistency#vault-1-7-mitigations), which was added in Vault Release 1.7, depend on some modifications to clients to include per request header options to ‘always forward the request to the active node’ OR to ‘conditionally forward the request to the active node’ if it would otherwise result in a stale read OR to ‘fail requests’ with error 412 if they might result in a stale read.
- The Vault Agent can also be leveraged for proxied requests to achieve consistency via the above mitigations without client modifications.
- Server Side Consistent Tokens, added in Vault version 1.10, provide a more implicit way to achieve consistency, but only addresses consistency for new tokens.
The following table outlines the main differences:
| Client Controlled Consistency | Agent with client controlled consistency settings | Server Side Consistent Tokens |
| ------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------ |
| Needs Client side modifications for consistency (per request header options need to be included). | Vault Agent can also be leveraged to achieve consistency without client modifications for proxied requests. | Implicit way for consistency where the relevant minimum WAL state information is embedded within the token itself. |
| Works across clusters too (Performance standbys and Performance Replication) | Single cluster only (Performance Standby) | Single cluster only (Performance Standby) |
| Applies to any Vault operation | Applies to any Vault operation | Applies to login / token create requests only |
| May have performance implications via enforcing too much consistency | May have performance implications, via enforcing too much consistency for proxied requests | May have performance implications if server side configuration to forward requests to active nodes is leveraged. |
~> **Note:** Client controlled consistency headers, if configured, will take precedence over the server configuration.
Finally, when speaking of performance implications above, there are two kinds that you should keep in mind while selecting the best option for your use case:
- Using forwarding will impact horizontal scalability by placing additional load on the active node.
- Not using forwarding will impact the latency of client requests due to retrying until the state is consistent.
### Q: Is this feature something I need with Consul Storage?
Consul has a [default consistency model](/consul/api-docs/features/consistency) and this feature is not relevant with Consul storage.
---
layout: docs
page_title: Venafi - Secrets Engines
description: The Venafi integrated secrets engine for Vault.
---
# Venafi secrets engine for HashiCorp Vault
The Venafi Machine Identity Secrets Engine provides applications with the
ability to dynamically generate SSL/TLS certificates that serve as machine
identities. Using
[Venafi Trust Protection Platform](https://www.venafi.com/platform/trust-protection-platform)
or [Venafi Cloud](https://www.venafi.com/venaficloud) assures compliance
with enterprise policy and consistency with industry standard trust protection.
Designed for high performance with the same interface as the built-in PKI
secrets engine, services can get certificates without manually generating a
private key and CSR, submitting to a certificate authority, and waiting for a
verification and signing process to complete. Venafi's certificate authority
integrations and policy controls, combined with Vault's built-in authentication
and authorization mechanisms, provide the verification functionality.
Like the built-in PKI secrets engine, short-lived certificates for ephemeral
workloads are the primary focus of the Venafi secrets engine. As such,
revocation is not currently supported.
The Venafi secrets engine makes use of HashiCorp Vault's
[plugin system](/vault/docs/plugins)
and Venafi's [VCert Client SDK](https://github.com/Venafi/vcert). If you have
questions about the Venafi secrets engine, have an issue to report, or have
developed improvements that you want to contribute, visit the
[GitHub](https://github.com/Venafi/vault-pki-backend-venafi) repository.
## Considerations
To successfully deploy this secrets engine, there are some important
considerations. Before using Venafi secrets engine, you should read every
consideration.
### Venafi trust protection platform requirements
Your certificate authority (CA) must be able to issue a certificate in
under one minute. Microsoft Active Directory Certificate Services (ADCS) is a
popular choice. Other CA choices may have slightly different
requirements.
Within Trust Protection Platform, configure these settings. For more
information see the _Venafi Administration Guide_.
- A user account that has an authentication token for the "Venafi Secrets
Engine for HashiCorp Vault" (ID "hashicorp-vault-by-venafi") API Application
as of 20.1 (or scope "certificate:manage" for 19.2 through 19.4) or has been
granted WebSDK Access (deprecated)
- A Policy folder where the user has the following permissions: View, Read,
Write, Create.
- Enterprise compliant policies applied to the folder including:
- Subject DN values for Organizational Unit (OU), Organization (O),
City/Locality (L), State/Province (ST) and Country (C).
- CA Template that Trust Protection Platform will use to enroll general
certificate requests.
- Management Type not locked or locked to 'Enrollment'.
- Certificate Signing Request (CSR) Generation unlocked or not locked to
'Service Generated CSR'.
- Generate Key/CSR on Application not locked or locked to 'No'.
- (Recommended) Disable Automatic Renewal set to 'Yes'.
- (Recommended) Key Bit Strength set to 2048 or higher.
- (Recommended) Domain Whitelisting policy appropriately assigned.
**NOTE**: If you are using Microsoft ADCS, the CRL distribution point and
Authority Information Access (AIA) URIs must start with an HTTP URI
(non-default configuration). If an LDAP URI appears first in the X509v3
extensions, some applications, such as NGINX ingress controllers, will fail
because they cannot retrieve CRL and OCSP information.
#### Trust between Vault and trust protection platform
The Trust Protection Platform REST API (WebSDK) must be secured with a
certificate. Generally, the certificate is issued by a CA that is not publicly
trusted so establishing trust is a critical part of your setup.
Two methods can be used to establish trust. Both require the trust anchor
(root CA certificate) of the WebSDK certificate. If you have administrative
access, you can import the root certificate into the trust store for your
operating system. If you don't have administrative access, or prefer not to
make changes to your system configuration, save the root certificate to a file
in PEM format (e.g. /opt/venafi/bundle.pem) and reference it using the
`trust_bundle_file` parameter whenever you create or update a PKI role in your
Vault.
### Venafi Cloud requirements
If you are using Venafi Cloud, be sure to set up an issuing template, project,
and any other dependencies that appear in the Venafi Cloud documentation.
- Set up an issuing template to link Venafi Cloud to your CA. To learn more,
search for "Issuing Templates" in the
[Venafi Cloud Help system](https://docs.venafi.cloud/help/Default.htm).
- Create a project and zone that identifies the template and other information.
To learn more, search for "Projects" in the
[Venafi Cloud Help system](https://docs.venafi.cloud/help/Default.htm).
## Setup
<Tabs>
<Tab heading="Vault" group="vault">
Before certificates can be issued, you must complete these steps to configure the
Venafi secrets engine:
1. Create the [directory](/vault/docs/plugins/plugin-architecture#plugin-directory)
where your Vault server will look for plugins (e.g. /etc/vault/vault_plugins).
The directory must not be a symbolic link. On macOS, for example, /etc is a
link to /private/etc. To avoid errors, choose an alternative directory such
as /private/etc/vault/vault_plugins.
1. Download the latest `vault-pki-backend-venafi`
[release package](https://github.com/Venafi/vault-pki-backend-venafi/releases/latest)
for your operating system. Unzip the binary to the plugin directory. Note
that the URL for the zip file, referenced below, changes as new versions of the
plugin are released. Replace the version (0.12.0) of the release in the command below to
download the desired version.
```shell-session
$ wget https://github.com/Venafi/vault-pki-backend-venafi/releases/download/v0.12.0/venafi-pki-backend_v0.12.0_darwin.zip
$ unzip venafi-pki-backend_v0.12.0_darwin.zip
$ mv venafi-pki-backend /etc/vault/vault_plugins
```
1. Update the Vault [server configuration](/vault/docs/configuration/)
to specify the plugin directory:
```hcl
plugin_directory = "/etc/vault/vault_plugins"
```
1. Start your Vault using the [server command](/vault/docs/commands/server).
1. Get the SHA-256 checksum of the `venafi-pki-backend` plugin binary:
```shell-session
$ SHA256=$(sha256sum /etc/vault/vault_plugins/venafi-pki-backend | cut -d' ' -f1)
```
1. Register the `venafi-pki-backend` plugin in the Vault
[system catalog](/vault/docs/plugins/plugin-architecture#plugin-catalog):
```shell-session
$ vault write sys/plugins/catalog/secret/venafi-pki-backend \
sha_256="${SHA256}" command="venafi-pki-backend"
```
1. Enable the Venafi secrets engine:
```shell-session
$ vault secrets enable -path=venafi-pki -plugin-name=venafi-pki-backend plugin
```
1. Configure a Venafi secret that maps a name in Vault to connection and authentication
settings for enrolling certificates using Venafi. The zone is a policy folder for Trust
Protection Platform or a DevOps project zone for Venafi Cloud.
Obtain the `access_token` and `refresh_token` for Trust Protection Platform using the
[VCert CLI](https://github.com/Venafi/vcert/blob/master/README-CLI-PLATFORM.md#obtaining-an-authorization-token)
(`getcred` action with `--client-id "hashicorp-vault-by-venafi"` and
`--scope "certificate:manage"`) or the Platform's Authorize REST API method.
To see all options available for venafi secrets, use
`vault path-help venafi-pki/venafi/:name` after creating the secret.
**Trust Protection Platform**:
```shell-session
$ vault write venafi-pki/venafi/tpp \
url="https://tpp.venafi.example" \
access_token="tn1PwE1QTZorXmvnTowSyA==" \
refresh_token="MGxV7DzNnclQi9CkJMCXCg==" \
zone="DevOps\\HashiCorp Vault" \
trust_bundle_file="/path-to/bundle.pem"
```
**Venafi Cloud**:
```shell-session
$ vault write venafi-pki/venafi/cloud \
apikey="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
zone="zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz"
```
1. Lastly, configure a [role](/vault/docs/secrets/pki)
that maps a name in Vault to a Venafi secret for enrollment. To see all
options available for roles, including `ttl`, `max_ttl` and `issuer_hint`
(for validity), use `vault path-help venafi-pki/roles/:name` after
creating the role.
**Trust Protection Platform**:
```shell-session
$ vault write venafi-pki/roles/tpp \
venafi_secret=tpp \
store_by=serial store_pkey=true \
allowed_domains=example.com \
allow_subdomains=true
```
**Venafi Cloud**:
```shell-session
$ vault write venafi-pki/roles/cloud \
venafi_secret=cloud \
store_by=serial store_pkey=true \
allowed_domains=example.com \
allow_subdomains=true
```
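If you want to spot-check the setup, listing should work here as it does for
the built-in PKI engine (an assumption worth verifying in your environment):
```shell-session
$ vault list venafi-pki/roles
```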
</Tab>
<Tab heading="HCP Vault Dedicated" group="hcp">
~> The Venafi Secrets Engine on HCP Vault Dedicated currently supports Venafi Cloud or Trust Protection Platform instances secured with a certificate from a publicly trusted CA.
Uploading a certificate signed by a private CA using the `trust_bundle_file` parameter is not available on HCP Vault Dedicated; that workflow requires running self-managed Vault.
Before certificates can be issued, you must complete these steps to configure the Venafi secrets engine:
1. Navigate to your HCP Vault Dedicated cluster's [Integrations](/hcp/docs/vault/integrations#hashicorp-partner-plugins) page within the HCP portal
to add the Venafi secrets engine to your cluster.
1. After the Venafi plugin has been successfully added to your cluster, you can use the Vault CLI to configure the Venafi secrets engine
for use.
1. Enable the Venafi secrets engine:
```shell-session
$ vault secrets enable -path=venafi-pki -plugin-name=venafi-pki-backend plugin
```
1. Configure a Venafi secret that maps a name in Vault to connection and authentication
settings for enrolling certificates using Venafi. The zone is a DevOps project zone for Venafi Cloud.
To see all options available for venafi secrets, use
`vault path-help venafi-pki/venafi/:name` after creating the secret.
**Venafi Cloud**:
```shell-session
$ vault write venafi-pki/venafi/cloud \
apikey="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
zone="zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz"
```
1. Lastly, configure a [role](/vault/docs/secrets/pki)
that maps a name in Vault to a Venafi secret for enrollment. To see all
options available for roles, including `ttl`, `max_ttl` and `issuer_hint`
(for validity), use `vault path-help venafi-pki/roles/:name` after
creating the role.
**Venafi Cloud**:
```shell-session
$ vault write venafi-pki/roles/cloud \
venafi_secret=cloud \
store_by=serial store_pkey=true \
allowed_domains=example.com \
allow_subdomains=true
```
</Tab>
</Tabs>
## Usage
After the Venafi secrets engine is configured and a user/machine has a Vault
token with the proper permission, it can enroll certificates using Venafi.
To see all of the options available when requesting a certificate, including
`ttl` (for validity), `key_password`, and `custom_fields`, use
`vault path-help venafi-pki/issue/:role-name` and
`vault path-help venafi-pki/sign/:role-name`.
1. Generate a certificate by writing to the `/issue` endpoint with the name of
the role:
**Trust Protection Platform**:
```shell-session
$ vault write venafi-pki/issue/tpp common_name="common-name.example.com" \
alt_names="dns-san-1.example.com,dns-san-2.example.com"
```
**Example output:**
```text
Key Value
--- -----
lease_id venafi-pki/issue/tpp/oLih42SCFzyjntxGc00vqmWH
lease_duration 719h49m55s
lease_renewable false
certificate -----BEGIN CERTIFICATE-----
certificate_chain -----BEGIN CERTIFICATE-----
common_name common-name.example.com
private_key -----BEGIN RSA PRIVATE KEY-----
serial_number 1d:bc:a8:3c:00:00:00:05:5c:e8
```
**Venafi Cloud**:
```shell-session
$ vault write venafi-pki/issue/cloud common_name="common-name.example.com" \
alt_names="dns-san-1.example.com,dns-san-2.example.com"
```
**Example output:**
```text
Key Value
--- -----
lease_id venafi-pki/issue/cloud/1WCNvXKiwboWfRRfjzlPAwEi
lease_duration 167h59m58s
lease_renewable false
certificate -----BEGIN CERTIFICATE-----
certificate_chain -----BEGIN CERTIFICATE-----
common_name common-name.example.com
private_key -----BEGIN RSA PRIVATE KEY-----
serial_number 17:47:8b:13:90:b8:3d:87:b0:dc:b6:9e:00:2b:87:02:c9:d3:1e:8a
```
1. Or sign a CSR from a file by writing to the `/sign` endpoint with the name of
the role:
**Trust Protection Platform**:
```shell-session
$ vault write venafi-pki/sign/tpp csr=@example.req
```
**Example output:**
```text
Key Value
--- -----
lease_id venafi-pki/sign/tpp/tQq3QNY45e4sJMqTTI9DXEGK
lease_duration 719h49m57s
lease_renewable false
certificate -----BEGIN CERTIFICATE-----
certificate_chain -----BEGIN CERTIFICATE-----
common_name common-name.example.com
serial_number 1d:c4:07:9a:00:00:00:05:5c:ea
```
**Venafi Cloud**:
```shell-session
$ vault write venafi-pki/sign/cloud csr=@example.req
```
**Example output:**
```text
Key Value
--- -----
lease_id venafi-pki/sign/cloud/fF44FdMAjuCdC29w3Ff81hes
lease_duration 167h59m58s
lease_renewable false
certificate -----BEGIN CERTIFICATE-----
certificate_chain -----BEGIN CERTIFICATE-----
common_name common-name.example.com
serial_number 76:55:e2:14:de:c8:3f:e1:64:4a:fa:37:d4:6e:f5:ef:5e:4c:16:5b
```
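As with other Vault secrets engines, the standard `-field` flag is handy for
scripting; for example, to capture just the certificate from an enrollment
(the role and common name reuse the examples above):
```shell-session
$ vault write -field=certificate venafi-pki/issue/tpp \
    common_name="common-name.example.com" > common-name.crt
```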
## API
Venafi Machine Identity Secrets Engine uses the same
[Vault API](/vault/api-docs/secret/pki)
as the built-in PKI secrets engine. Some methods, such as those for
managing certificate authorities, do not apply.
---
layout: docs
page_title: Google Cloud - Secrets Engines
description: |-
The Google Cloud secrets engine for Vault dynamically generates Google Cloud
service account keys and OAuth tokens based on IAM policies.
---
# Google Cloud secrets engine
The Google Cloud Vault secrets engine dynamically generates Google Cloud service
account keys and OAuth tokens based on IAM policies. This enables users to gain
access to Google Cloud resources without needing to create or manage a dedicated
service account.
The benefits of using this secrets engine to manage Google Cloud IAM service accounts are:
- **Automatic cleanup of GCP IAM service account keys** - each Service Account
key is associated with a Vault lease. When the lease expires (either during
normal revocation or through early revocation), the service account key is
automatically revoked.
- **Quick, short-term access** - users do not need to create new GCP Service
Accounts for short-term or one-off access (such as batch jobs or quick
introspection).
- **Multi-cloud and hybrid cloud applications** - users authenticate to Vault
using a central identity service (such as LDAP) and generate GCP credentials
without the need to create or manage a new Service Account for that user.
~> **NOTE: Deprecation of `access_token` Leases**: In previous versions of this secrets engine
(released with Vault <= 0.11.1), a lease was generated with each access token. If you're using
an old version of the plugin, please upgrade. Read more in the
[upgrade guide](#deprecation-of-access-token-leases).
## Setup
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. Enable the Google Cloud secrets engine:
```shell-session
$ vault secrets enable gcp
Success! Enabled the gcp secrets engine at: gcp/
```
By default, the secrets engine will mount at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
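   For example, to mount the engine at a hypothetical `gcp-eu/` path instead:

   ```shell-session
   $ vault secrets enable -path=gcp-eu gcp
   ```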
1. Configure the secrets engine with account credentials, or leave the credentials
   unset to use Application Default Credentials.
```shell-session
$ vault write gcp/config credentials=@my-credentials.json
Success! Data written to: gcp/config
```
If you are running Vault from inside [Google Compute Engine][gce] or [Google
Kubernetes Engine][gke], the instance or pod service account can be used in
place of specifying the credentials JSON file.
For more information on authentication, see the [authentication section](#authentication) below.
In some cases, you cannot set sensitive IAM security credentials in your
Vault configuration. For example, your organization may require that all
security credentials are short-lived or explicitly tied to a machine identity.
To provide IAM security credentials to Vault, we recommend using Vault
[plugin workload identity federation](#plugin-workload-identity-federation-wif)
(WIF) as shown below.
1. Alternatively, configure the audience claim value and the service account email to assume for plugin workload identity federation:
```shell-session
$ vault write gcp/config \
identity_token_audience="<TOKEN AUDIENCE>" \
service_account_email="<SERVICE ACCOUNT EMAIL>"
```
Vault's identity token provider signs the plugin identity token JWT internally.
If a trust relationship exists between Vault and GCP through WIF, the secrets
engine can exchange the Vault identity token for a
[federated access token](https://cloud.google.com/docs/authentication/token-types#access).
To configure a trusted relationship between Vault and GCP:
- You must configure the [identity token issuer backend](/vault/api-docs/secret/identity/tokens#configure-the-identity-tokens-backend)
for Vault.
- GCP must have a
[workload identity pool and provider](https://cloud.google.com/iam/docs/manage-workload-identity-pools-providers)
configured with information about the fully qualified and network-reachable
issuer URL for the Vault plugin's
[identity token provider](/vault/api-docs/secret/identity/tokens#read-plugin-identity-well-known-configurations).
Establishing a trusted relationship between Vault and GCP ensures that GCP
can fetch JWKS
[public keys](/vault/api-docs/secret/identity/tokens#read-active-public-keys)
and verify the plugin identity token signature.
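   For example, the issuer URL for Vault's identity token provider can be set as
   follows (a sketch; the URL is illustrative and must be network-reachable by GCP):

   ```shell-session
   $ vault write identity/oidc/config issuer="https://vault.example.com:8200"
   ```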
1. Configure rolesets or static accounts. See the relevant sections below.
## Rolesets
A roleset consists of a Vault-managed GCP service account along with a set of IAM bindings
defined for that service account. The name of the service account is generated based on the time
of creation or update. You should not depend on the name of the service account being
fixed and should manage all IAM bindings for the service account through the `bindings` parameter
when creating or updating the roleset.
For more information on the differences between rolesets and static accounts, see the
[things to note](#things-to-note) section below.
### Roleset policy considerations
Starting with Vault 1.8.0, existing permissive policies containing globs
for the GCP Secrets Engine may grant additional privileges due to the introduction
of `/gcp/roleset/:roleset/token` and `/gcp/roleset/:roleset/key` endpoints.
The following policy grants a user the ability to read all rolesets, but would
also allow them to generate tokens and keys. This type of policy is not recommended:
```hcl
# DO NOT USE
path "/gcp/roleset/*" {
capabilities = ["read"]
}
```
The following example demonstrates how a wildcard can instead be used in a roleset policy to
adhere to the principle of least privilege:
```hcl
path "/gcp/roleset/+" {
capabilities = ["read"]
}
```
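To grant token generation for one specific roleset without exposing other rolesets,
scope the path explicitly. As a sketch (the policy and roleset names are illustrative),
such a policy could be written from standard input:

```shell-session
$ vault policy write gcp-token-only - <<EOF
path "gcp/roleset/my-token-roleset/token" {
  capabilities = ["read"]
}
EOF
```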
For more information on policy syntax, see the
[policy documentation](/vault/docs/concepts/policies#policy-syntax).
### Examples
To configure a roleset that generates OAuth2 access tokens (preferred):
```shell-session
$ vault write gcp/roleset/my-token-roleset \
project="my-project-id" \
secret_type="access_token" \
token_scopes="https://www.googleapis.com/auth/cloud-platform" \
bindings=-<<EOF
resource "//cloudresourcemanager.googleapis.com/projects/my-project-id" {
roles = ["roles/viewer"]
}
EOF
```
To configure a roleset that generates GCP Service Account keys:
```shell-session
$ vault write gcp/roleset/my-key-roleset \
project="my-project" \
secret_type="service_account_key" \
bindings=-<<EOF
resource "//cloudresourcemanager.googleapis.com/projects/my-project" {
roles = ["roles/viewer"]
}
EOF
```
Alternatively, provide a file for the `bindings` argument like so:
```shell-session
$ vault write gcp/roleset/my-roleset \
    bindings=@mybindings.hcl \
...
```
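The file referenced above uses the same HCL binding format described in the
[bindings](#bindings) section. As an illustration, it could be created like so
(the resource and roles are examples):

```shell-session
$ cat > mybindings.hcl <<EOF
resource "//cloudresourcemanager.googleapis.com/projects/my-project-id" {
  roles = ["roles/viewer"]
}
EOF
```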
For more information on role bindings and sample role bindings, please see
the [bindings](#bindings) section below.
For more information on the differences between OAuth2 access tokens and
Service Account keys, see the [things to note](#things-to-note) section
below.
For more information on creating and managing rolesets, see the
[GCP secrets engine API docs][api].
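To verify which rolesets exist on the mount, list them (the output shown is
illustrative):

```shell-session
$ vault list gcp/rolesets
Keys
----
my-key-roleset
my-token-roleset
```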
## Static accounts
Static accounts are GCP service accounts that are created outside of Vault and then provided to
Vault to generate access tokens or keys. You can also use Vault to optionally manage IAM bindings
for the service account.
For more information on the differences between rolesets and static accounts, see the
[things to note](#things-to-note) section below.
### Examples
Before configuring a static account, you need to create a
[Google Cloud Service Account][service-accounts]. Take note of the email address of the service
account you have created. Service account emails are of the format
`<service-account-id>@<project-id>.iam.gserviceaccount.com`.
To configure a static account that generates OAuth2 access tokens (preferred):
```shell-session
$ vault write gcp/static-account/my-token-account \
service_account_email="[email protected]" \
secret_type="access_token" \
token_scopes="https://www.googleapis.com/auth/cloud-platform" \
bindings=-<<EOF
resource "//cloudresourcemanager.googleapis.com/projects/my-project" {
roles = ["roles/viewer"]
}
EOF
```
To configure a static account that generates GCP Service Account keys:
```shell-session
$ vault write gcp/static-account/my-key-account \
service_account_email="[email protected]" \
secret_type="service_account_key" \
bindings=-<<EOF
resource "//cloudresourcemanager.googleapis.com/projects/my-project" {
roles = ["roles/viewer"]
}
EOF
```
Alternatively, provide a file for the `bindings` argument like so:
```shell-session
$ vault write gcp/static-account/my-account \
    bindings=@mybindings.hcl \
...
```
For more information on role bindings and sample role bindings, please see
the [bindings](#bindings) section below.
For more information on the differences between OAuth2 access tokens and
Service Account keys, see the [things to note](#things-to-note) section
below.
For more information on creating and managing static accounts, see the
[GCP secrets engine API docs][api].
## Impersonated accounts
Impersonated accounts are a way to generate an OAuth2 [access token](/vault/docs/secrets/gcp#access-tokens) that is granted
the permissions and accesses of another given service account. These access
tokens do not have the same 10-key limit as service account keys do, yet they
retain their short-lived nature. By default, their TTL in GCP is 1 hour, but
this may be configured to be up to 12 hours as explained in Google's
[short-lived credentials documentation](https://cloud.google.com/iam/docs/create-short-lived-credentials-delegated#sa-credentials-oauth).
For more information regarding service account impersonation in GCP, consider starting
with their documentation [available here](https://cloud.google.com/iam/docs/impersonating-service-accounts).
### Examples
To configure a Vault role that impersonates the administrator on the Google
Cloud project with the cloud platform and compute scopes:
```shell-session
$ vault write gcp/impersonated-account/my-token-impersonate \
service_account_email="[email protected]" \
token_scopes="https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/compute" \
ttl="6h"
```
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can generate credentials. Depending on how the Vault role
was configured, you can generate OAuth2 tokens or service account keys.
### Access tokens
To generate OAuth2 [access tokens](https://cloud.google.com/docs/authentication/token-types#access),
read from the [`gcp/.../token`](/vault/api-docs/secret/gcp#generate-secret-iam-service-account-creds-oauth2-access-token)
API. If using a roleset or static account, it must have been created with a
[`secret_type`](/vault/api-docs/secret/gcp#secret_type) of `access_token`. Impersonated accounts will
generate OAuth2 tokens by default.
**Roleset:**
```shell-session
$ vault read gcp/roleset/my-token-roleset/token
Key Value
--- -----
expires_at_seconds 1537402548
token ya29.c.ElodBmNPwHUNY5gcBpnXcE4ywG4w1k...
token_ttl 3599
```
**Static account:**
```shell-session
$ vault read gcp/static-account/my-token-account/token
Key Value
--- -----
expires_at_seconds 1672231587
token ya29.c.b0Aa9VdykAdYoW9S1ImtPZykF_oTi9...
token_ttl 3599
```
**Impersonated account:**
```shell-session
$ vault read gcp/impersonated-account/my-token-impersonate/token
Key Value
--- -----
expires_at_seconds 1671667844
token ya29.c.b0AT7lpjBRmO7ghBEyMV18evd016hq...
token_ttl 59m59s
```
This endpoint generates a non-renewable, non-revocable static OAuth2 access token
with a max lifetime of one hour, where `token_ttl` is given in seconds and the
`expires_at_seconds` is the expiry time for the token, given as a Unix timestamp.
The `token` value then can be used as a HTTP Authorization Bearer token in requests
to GCP APIs:
```shell-session
$ curl -H "Authorization: Bearer ya29.c.ElodBmNPwHUNY5gcBpnXcE4ywG4w1k..." \
    "https://cloudresourcemanager.googleapis.com/v1/projects/my-project"
```

The target URL above is illustrative; any GCP API endpoint permitted by the
token's scopes will work.
### Service account keys
To generate service account keys, read from `gcp/.../key`. Vault returns the service
account key data as a base64-encoded string in the `private_key_data` field. This can
be decoded with `base64 --decode` (which reads from standard input) or another base64
tool of your choice. The roleset or static account must have been created as type
`service_account_key`:
```shell-session
$ vault read gcp/roleset/my-key-roleset/key
Key Value
--- -----
lease_id gcp/key/my-key-roleset/ce563a99-5e55-389b...
lease_duration 30m
lease_renewable true
key_algorithm KEY_ALG_RSA_2048
key_type TYPE_GOOGLE_CREDENTIALS_FILE
private_key_data ewogICJ0eXBlIjogInNlcnZpY2VfYWNjb3VudCIsC...
```
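The `-field` flag can be combined with the decoding step described above to write
the key to a file, for example (the output path is illustrative):

```shell-session
$ vault read -field=private_key_data gcp/roleset/my-key-roleset/key | base64 --decode > my-key.json
```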
This endpoint generates a new [GCP IAM service account key][iam-keys] associated
with the role's Service Account. When the lease expires (or is revoked
early), the Service Account key will be deleted.
**There is a default limit of 10 keys per Service Account.** For more
information on this limit and recommended mitigation, please see the [things to
note](#things-to-note) section below.
## Bindings
Roleset or static account bindings define a list of resources and the associated IAM roles on that
resource. Bindings are used as the `binding` argument when creating or
updating a roleset or static account and are specified in the following format using HCL:
```hcl
resource NAME {
roles = [ROLE, [ROLE...]]
}
```
For example:
```hcl
resource "buckets/my-bucket" {
roles = [
"roles/storage.objectAdmin",
"roles/storage.legacyBucketReader",
]
}
# At instance level, using self-link
resource "https://www.googleapis.com/compute/v1/projects/my-project/zone/my-zone/instances/my-instance" {
roles = [
"roles/compute.instanceAdmin.v1"
]
}
# At project level
resource "//cloudresourcemanager.googleapis.com/projects/my-project" {
roles = [
"roles/compute.instanceAdmin.v1",
"roles/iam.serviceAccountUser", # required if managing instances that run as service accounts
]
}
# At folder level
resource "//cloudresourcemanager.googleapis.com/folders/123456" {
roles = [
"roles/compute.viewer",
"roles/deploymentmanager.viewer",
]
}
```
The top-level `resource` block defines the resource or resource path for which
IAM policy information will be bound. The resource path may be specified in a
few different formats:
- **Project-level self-link** - a URI with scheme and host, generally
corresponding to the `self_link` attribute of a resource in GCP. This must
include the resource nested in the parent project.
```text
# compute alpha zone
https://www.googleapis.com/compute/alpha/projects/my-project/zones/us-central1-c
```
- **Full resource name** - a schema-less URI consisting of a DNS-compatible API
service name and resource path. See the [full resource name API
documentation][resource-name-full] for more information.
```text
# Compute snapshot
//compute.googleapis.com/projects/my-project/snapshots/my-compute-snapshot
# Pubsub snapshot
//pubsub.googleapis.com/projects/my-project/snapshots/my-pubsub-snapshot
# BigQuery dataset
//bigquery.googleapis.com/projects/my-project/datasets/mydataset
# Resource manager
//cloudresourcemanager.googleapis.com/projects/my-project
```
- **Relative resource name** - A path-noscheme URI path, usually as accepted by
the API. Use this if the version or service are apparent from the resource
type. Please see the [relative resource name API
documentation][resource-name-relative] for more information.
```text
# Storage bucket objects
buckets/my-bucket
buckets/my-bucket/objects/my-object
# PubSub topics
projects/my-project/topics/my-pubsub-topic
```
The nested `roles` attribute is an array of string names of [GCP IAM
roles][iam-roles]. The roles may be specified in the following formats:
- **Global role name** - these are global roles built into Google Cloud. For the
full list of available roles, please see the [list of predefined GCP
roles][predefined-roles].
```text
roles/viewer
roles/bigquery.user
roles/billing.admin
```
- **Organization-level custom role** - these are roles that are created at the
organization level by organization owners.
```text
organizations/my-organization/roles/my-custom-role
```
For more information, please see the documentation on [GCP custom
roles][custom-roles].
- **Project-level custom role** - these are roles that are created at a
per-project level by project owners.
```text
projects/my-project/roles/my-custom-role
```
For more information, please see the documentation on [GCP custom
roles][custom-roles].
## Authentication
The Google Cloud Vault secrets backend uses the official Google Cloud Golang
SDK. This means it supports the common ways of [providing credentials to Google
Cloud][cloud-creds]. In addition to specifying `credentials` directly via Vault
configuration, you can also get configuration from the following values **on the
Vault server**:
1. The `GOOGLE_APPLICATION_CREDENTIALS` environment variable. This is specified
as the **path** to a Google Cloud credentials file, typically for a service
account. If this environment variable is present, the resulting credentials are
used. If the credentials are invalid, an error is returned.
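   For example, set the variable on the Vault server before starting it (the
   path is illustrative):

   ```shell-session
   $ export GOOGLE_APPLICATION_CREDENTIALS="/path/to/my-credentials.json"
   ```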
1. The identity of a Google Cloud [workload][workloads-ids]. When the Vault server is running
on a Google workload like [Google Compute Engine][gce] or [Google Kubernetes Engine][gke],
the identity associated with the workload is automatically used. To configure Google Compute
Engine with an identity, see [attached service accounts][attached-service-accounts]. To
configure Google Kubernetes Engine with an identity, see [GKE workload identity][gke-workload-ids].
For more information on service accounts, please see the [Google Cloud Service
Accounts documentation][service-accounts].
To use this secrets engine, the service account must have the following
minimum scope(s):
```text
https://www.googleapis.com/auth/cloud-platform
```
### Required permissions
The credentials given to Vault must have the following permissions when using rolesets at the
project level:
```text
# Service account + key admin
iam.serviceAccounts.create
iam.serviceAccounts.delete
iam.serviceAccounts.get
iam.serviceAccounts.list
iam.serviceAccounts.update
iam.serviceAccountKeys.create
iam.serviceAccountKeys.delete
iam.serviceAccountKeys.get
iam.serviceAccountKeys.list
```
When using static accounts or impersonated accounts, Vault must have the following permissions
at the service account level:
```text
# For `access_token` secrets and impersonated accounts
iam.serviceAccounts.getAccessToken
# For `service_account_keys` secrets
iam.serviceAccountKeys.create
iam.serviceAccountKeys.delete
iam.serviceAccountKeys.get
iam.serviceAccountKeys.list
```
When using rolesets or static accounts with bindings, Vault must have the following permissions:
```text
# IAM policy changes
<service>.<resource>.getIamPolicy
<service>.<resource>.setIamPolicy
```
where `<service>` and `<resource>` correspond to permissions which will be
granted, for example:
```text
# Projects
resourcemanager.projects.getIamPolicy
resourcemanager.projects.setIamPolicy
# All compute
compute.*.getIamPolicy
compute.*.setIamPolicy
# BigQuery datasets
bigquery.datasets.get
bigquery.datasets.update
```
You can either:
- Create a [custom role][custom-roles] using these permissions, and assign this
  role at the project level (a `gcloud` sketch follows this list)
- Assign the set of roles required to get resource-specific
`getIamPolicy/setIamPolicy` permissions. At a minimum you will need to assign
`roles/iam.serviceAccountAdmin` and `roles/iam.serviceAccountKeyAdmin` so
Vault can manage service accounts and keys.
- Notice that BigQuery requires different permissions than other resources. This is
  because BigQuery currently uses legacy ACLs instead of standard IAM permissions.
  This means that to update access on the dataset, Vault must be able to update the
  dataset's metadata.
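As a sketch, a project-level custom role for rolesets could be created with
`gcloud` (the role ID and title are illustrative; the permission list should
match the lists above):

```shell-session
$ gcloud iam roles create vaultSecretsAdmin \
    --project=my-project \
    --title="Vault Secrets Admin" \
    --permissions="iam.serviceAccounts.create,iam.serviceAccounts.delete,iam.serviceAccounts.get,iam.serviceAccounts.list,iam.serviceAccounts.update,iam.serviceAccountKeys.create,iam.serviceAccountKeys.delete,iam.serviceAccountKeys.get,iam.serviceAccountKeys.list,resourcemanager.projects.getIamPolicy,resourcemanager.projects.setIamPolicy"
```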
## Plugin Workload Identity Federation (WIF)
<EnterpriseAlert product="vault" />
The GCP secrets engine supports the plugin WIF workflow and has a source of identity called
a plugin identity token. The plugin identity token is a JWT that is signed internally by Vault's
[plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration).
If there is a trust relationship configured between Vault and GCP through
[workload identity federation](https://cloud.google.com/iam/docs/workload-identity-federation),
the secrets engine can exchange its identity token for short-lived access tokens needed to
perform its actions.
Exchanging identity tokens for access tokens lets the GCP secrets engine
operate without configuring explicit access to sensitive IAM security
credentials.
To configure the secrets engine to use plugin WIF:
1. Ensure that Vault [openid-configuration](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-openid-configuration)
and [public JWKS](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-public-jwks)
APIs are network-reachable by GCP. We recommend using an API proxy or gateway
if you need to limit Vault API exposure.
1. Create a
[workload identity pool and provider](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#create-pool-provider)
in GCP.
1. The provider URL **must** point at your [Vault plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration) with the
`/.well-known/openid-configuration` suffix removed. For example:
`https://host:port/v1/identity/oidc/plugins`.
1. Uniquely identify the recipient of the plugin identity token as the audience.
You can use the [default audience](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#prepare)
for the identity pool or a custom value less than 256 characters.
1. [Authenticate a workload](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#authenticate)
in GCP by granting the identity pool access to a dedicated service account using service account impersonation.
Filter requests using the unique `sub` claim issued by plugin identity tokens so the GCP secrets engine can
impersonate the service account. `sub` claims have the form: `plugin-identity:<NAMESPACE>:secret:<GCP_SECRETS_MOUNT_ACCESSOR>`.
1. Configure the GCP secrets engine with the OIDC audience value and service account
email.
```shell-session
$ vault write gcp/config \
identity_token_audience="//iam.googleapis.com/projects/410449834127/locations/global/workloadIdentityPools/vault-gcp-secrets-43777a63/providers/vault-gcp-secrets-wif-provider" \
service_account_email="vault-plugin-wif-secrets@hc-b712f250b4e04cacbadd258a90b.iam.gserviceaccount.com"
```
Your secrets engine can now use plugin WIF for its configuration credentials.
By default, WIF [credentials](https://cloud.google.com/iam/docs/workload-identity-federation#access_management)
have a time-to-live of 1 hour and automatically refresh when they expire.
Please see the [API documentation](/vault/api-docs/secret/gcp#write-config)
for more details on the fields associated with plugin WIF.
### Root credential rotation
If the mount is configured with credentials directly, the credential's key may be
rotated to a Vault-generated value that is not accessible by the operator. For more
details on this operation, please see the
[Root Credential Rotation](/vault/api-docs/secret/gcp#rotate-root-credentials) API docs.
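For example, to rotate credentials that were provided directly on the default
`gcp/` mount:

```shell-session
$ vault write -f gcp/config/rotate-root
```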
## Things to note
### Rolesets vs. static accounts
Advantages of rolesets:
- Service accounts and IAM bindings are fully managed by Vault
Disadvantages of rolesets:
- Cannot easily decouple IAM bindings from the ones managed in Vault
- Vault requires permissions to manage IAM bindings and service accounts
Advantages of static accounts:
- Can manage IAM bindings independently from the ones managed in Vault
- Vault does not require permissions to IAM bindings and service accounts and only permissions
related to the keys of the service account
Disadvantages of static accounts:
- Self-management of service accounts is necessary.
### Access tokens vs. service account keys
Advantages of `access_tokens`:
- Can generate an unlimited number of tokens per roleset
Disadvantages of `access_tokens`:
- Cannot be used with some client libraries or tools
- Have a static lifetime of one hour that cannot be modified, revoked, or extended.
Advantages of `service_account_keys`:
- Controllable life-time through Vault, allowing for longer access
- Can be used by all normal GCP tooling
Disadvantages of `service_account_keys`:
- Infinite lifetime in GCP (i.e. if they are not managed properly, leaked keys can live forever)
- Limited to 10 per roleset/service account.
When generating OAuth access tokens, Vault will still
generate a dedicated service account and key. This private key is stored in Vault
and is never accessible to other users, and the underlying key can
be rotated. See the [GCP API documentation][api] for more information on
rotation.
### Service accounts are tied to rolesets
Service Accounts are created when the roleset is created (or updated) rather
than each time a secret is generated. This may be different from how other
secrets engines behave, but it is for good reasons:
- IAM Service Account creation and permission propagation can take up to 60
seconds to complete. By creating the Service Account in advance, we speed up
the timeliness of future operations and reduce the flakiness of automated
workflows.
- Each GCP project has a limit on the number of IAM Service Accounts. You can
[request additional quota][quotas]. The quota increase is processed by humans,
so it is best to request this additional quota in advance. This limit is
currently 100, **including system-managed Service Accounts**. If Service
Accounts were created per secret, this quota limit would reduce the number of
secrets that can be generated.
### Service account keys quota limits
GCP IAM has a hard limit (currently 10) on the number of Service Account keys.
Attempts to generate more keys will result in an error. If you find yourself
running into this limit, consider the following:
- Have shorter TTLs or revoke access earlier. If you are not using past Service
Account keys, consider rotating and freeing quota earlier.
- Create additional rolesets which share the same set of permissions. Each
  additional roleset creates a new service account, increasing the number of
  keys you can create.
- Where possible, use OAuth2 access tokens instead of Service Account keys.
### Resources in IAM bindings must exist at roleset or static account creation
Because the bindings for the Service Account are set during roleset/static account creation,
resources that do not exist will fail the `getIamPolicy` API call.
### Roleset creation may partially fail
Every Service Account creation, key creation, and IAM policy change is a GCP API
call per resource. If an API call to one of these resources fails, the roleset
creation fails and Vault will attempt to roll back.
These rollbacks are API calls, so they may also fail. The secrets engine uses a
WAL to ensure that unused bindings are cleaned up. In the case of quota limits,
you may need to clean these up manually.
### Do not modify vault-owned IAM accounts
While Vault will initially create and assign permissions to IAM service
accounts, it is possible that an external user deletes or modifies this service
account. These changes are difficult to detect, and it is best to prevent this
type of modification through IAM permissions.
Vault roleset Service Accounts will have emails in the format:
```
vault<roleset-prefix>-<creation-unix-timestamp>@...
```
Communicate with your teams (or use IAM permissions) to not modify these
resources.
## Help & support
The Google Cloud Vault secrets engine is written as an external Vault plugin and
thus exists outside the main Vault repository. It is automatically bundled with
Vault releases, but the code is managed separately.
Please report issues, add feature requests, and submit contributions to the
[vault-plugin-secrets-gcp repo on GitHub][repo].
## API
The GCP secrets engine has a full HTTP API. Please see the [GCP secrets engine API docs][api]
for more details.
[api]: /vault/api-docs/secret/gcp
[cloud-creds]: https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application
[custom-roles]: https://cloud.google.com/iam/docs/creating-custom-roles
[gce]: https://cloud.google.com/compute/
[gke]: https://cloud.google.com/kubernetes-engine/
[iam-keys]: https://cloud.google.com/iam/docs/service-accounts#service_account_keys
[iam-roles]: https://cloud.google.com/iam/docs/understanding-roles
[predefined-roles]: https://cloud.google.com/iam/docs/understanding-roles#predefined_roles
[repo]: https://github.com/hashicorp/vault-plugin-secrets-gcp
[resource-name-full]: https://cloud.google.com/apis/design/resource_names#full_resource_name
[resource-name-relative]: https://cloud.google.com/apis/design/resource_names#relative_resource_name
[quotas]: https://cloud.google.com/compute/quotas
[service-accounts]: https://cloud.google.com/compute/docs/access/service-accounts
[workloads-ids]: https://cloud.google.com/iam/docs/workload-identities
[attached-service-accounts]: https://cloud.google.com/iam/docs/workload-identities#attached-service-accounts
[gke-workload-ids]: https://cloud.google.com/iam/docs/workload-identities#kubernetes-workload-identity
## Upgrade guides
### Deprecation of access token leases
~> **NOTE**: This deprecation only affects access tokens. There is no change to the `service_account_key` secret type.
Previous versions of this secrets engine (Vault <= 0.11.1) created a lease for
each access token secret. We have removed them after discovering that these
tokens, specifically Google OAuth2 tokens for IAM service accounts, are
non-revocable and have a static 60 minute lifetime. To match the current
limitations of the GCP APIs, the secrets engine will no longer allow for
revocation or manage the token TTL - more specifically, **the access_token
response will no longer include `lease_id` or other lease information**. This
change does not reflect any change to the actual underlying OAuth tokens or GCP
service accounts.
To upgrade:
- Remove references from `lease_id`, `lease_duration` or other `lease_*`
attributes when reading responses for the access tokens secrets endpoint (i.e.
from `gcp/token/$roleset`). See the [documentation for access
tokens](#access-tokens) to see the new format for the response.
- Be aware of leftover leases from previous versions. While these old leases
will still be revocable, they will not actually invalidate their associated
access token, and that token will still be usable for up to one hour.
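If leftover leases from an older mount should be cleaned up anyway, they can be
revoked in bulk by prefix (illustrative; assumes the default `gcp/` mount).
Revoking them removes the lease entries but, as noted above, does not invalidate
the underlying tokens:

```shell-session
$ vault lease revoke -prefix gcp/token
```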
with the same set of permissions This will create a new service account and increases the number of keys you can create Where possible use OAuth2 access tokens instead of Service Account keys Resources in IAM bindings must exist at roleset or static account creation Because the bindings for the Service Account are set during roleset static account creation resources that do not exist will fail the getIamPolicy API call Roleset creation may partially fail Every Service Account creation key creation and IAM policy change is a GCP API call per resource If an API call to one of these resources fails the roleset creation fails and Vault will attempt to rollback These rollbacks are API calls so they may also fail The secrets engine uses a WAL to ensure that unused bindings are cleaned up In the case of quota limits you may need to clean these up manually Do not modify vault owned IAM accounts While Vault will initially create and assign permissions to IAM service accounts it is possible that an external user deletes or modifies this service account These changes are difficult to detect and it is best to prevent this type of modification through IAM permissions Vault roleset Service Accounts will have emails in the format vault roleset prefix creation unix timestamp Communicate with your teams or use IAM permissions to not modify these resources Help amp support The Google Cloud Vault secrets engine is written as an external Vault plugin and thus exists outside the main Vault repository It is automatically bundled with Vault releases but the code is managed separately Please report issues add feature requests and submit contributions to the vault plugin secrets gcp repo on GitHub repo API The GCP secrets engine has a full HTTP API Please see the GCP secrets engine API docs api for more details api vault api docs secret gcp cloud creds https cloud google com docs authentication production providing credentials to your application custom roles https cloud google com iam docs creating custom roles gce https cloud google com compute gke https cloud google com kubernetes engine iam keys https cloud google com iam docs service accounts service account keys iam roles https cloud google com iam docs understanding roles predefined roles https cloud google com iam docs understanding roles predefined roles repo https github com hashicorp vault plugin secrets gcp resource name full https cloud google com apis design resource names full resource name resource name relative https cloud google com apis design resource names relative resource name quotas https cloud google com compute quotas service accounts https cloud google com compute docs access service accounts workloads ids https cloud google com iam docs workload identities attached service accounts https cloud google com iam docs workload identities attached service accounts gke workload ids https cloud google com iam docs workload identities kubernetes workload identity Upgrade guides Deprecation of access token leases NOTE This deprecation only affects access tokens There is no change to the service account key secret type Previous versions of this secrets engine Vault 0 11 1 created a lease for each access token secret We have removed them after discovering that these tokens specifically Google OAuth2 tokens for IAM service accounts are non revocable and have a static 60 minute lifetime To match the current limitations of the GCP APIs the secrets engine will no longer allow for revocation or manage the token TTL more specifically the access token 
response will no longer include lease id or other lease information This change does not reflect any change to the actual underlying OAuth tokens or GCP service accounts To upgrade Remove references from lease id lease duration or other lease attributes when reading responses for the access tokens secrets endpoint i e from gcp token roleset See the documentation for access tokens access tokens to see the new format for the response Be aware of leftover leases from previous versions While these old leases will still be revocable they will not actually invalidate their associated access token and that token will still be useable for up to one hour |
---
layout: docs
page_title: AWS - Secrets Engines
description: |-
The AWS secrets engine for Vault generates access keys dynamically based on
IAM policies.
---
# AWS secrets engine
The AWS secrets engine generates AWS access credentials dynamically based on IAM
policies. This generally makes working with AWS IAM easier, since it does not
involve clicking in the web UI. Additionally, the process is codified and mapped
to internal auth methods (such as LDAP). The AWS IAM credentials are time-based
and are automatically revoked when the Vault lease expires.
Vault supports four different types of credentials to retrieve from AWS:
1. `iam_user`: Vault will create an IAM user for each lease, attach the managed
and inline IAM policies as specified in the role to the user, and if a
[permissions
boundary](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html)
is specified on the role, the permissions boundary will also be attached.
Vault will then generate an access key and secret key for the IAM user and
return them to the caller. IAM users have no session tokens and so no
   session token will be returned. Vault will delete the IAM user when the lease expires.
2. `assumed_role`: Vault will call
[sts:AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html)
and return the access key, secret key, and session token to the caller.
3. `federation_token`: Vault will call
[sts:GetFederationToken](https://docs.aws.amazon.com/STS/latest/APIReference/API_GetFederationToken.html)
passing in the supplied AWS policy document and return the access key, secret
key, and session token to the caller.
4. `session_token`: Vault will call
[sts:GetSessionToken](https://docs.aws.amazon.com/STS/latest/APIReference/API_GetSessionToken.html)
and return the access key, secret key, and session token to the caller.
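The sections below cover each type in detail. As a rough sketch, `iam_user`
(and `session_token`) credentials are read from the `aws/creds/<role>` endpoint,
while `assumed_role` and `federation_token` credentials are commonly generated
via the `aws/sts/<role>` endpoint (role names here are illustrative):

```shell-session
$ vault read aws/creds/my-iam-role

$ vault write aws/sts/my-sts-role ttl=15m
```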
### Static roles
The AWS secrets engine supports the concept of "static roles", which are
a 1-to-1 mapping of Vault roles to IAM users. The current access keys
for the user are stored and automatically rotated by Vault on a
configurable period of time. This is in contrast to dynamic secrets, where a
unique set of IAM credentials is generated with each credential request.
When credentials are requested for the Role, Vault returns the current
Access Key ID and Secret Access Key for the configured user, allowing anyone with the proper
Vault policies to have access to the IAM credentials.
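As a minimal sketch (names are illustrative, and the parameters follow the
static-role API linked below), you create a static role against an existing
IAM user and then read its current credentials:

```shell-session
$ vault write aws/static-roles/my-static-role \
    username=my-existing-iam-user \
    rotation_period=1d

$ vault read aws/static-creds/my-static-role
```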
Please see the [API documentation](/vault/api-docs/secret/aws#create-static-role) for details on this feature.
## Setup
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. Enable the AWS secrets engine:
```text
$ vault secrets enable aws
Success! Enabled the aws secrets engine at: aws/
```
By default, the secrets engine will mount at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure the credentials that Vault uses to communicate with AWS to generate
the IAM credentials:
```text
$ vault write aws/config/root \
access_key=AKIAJWVN5Z4FOFT7NLNA \
secret_key=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 \
region=us-east-1
```
Internally, Vault will connect to AWS using these credentials. As such,
these credentials must be a superset of any policies which might be granted
on IAM credentials. Since Vault uses the official AWS SDK, it will use the
specified credentials. You can also specify the credentials via the standard
AWS environment credentials, shared file credentials, or IAM role/ECS task
   credentials. (Note that you can't authorize Vault with IAM role credentials if you plan
on using STS Federation Tokens, since the temporary security credentials
associated with the role are not authorized to use GetFederationToken.)
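   For example, rather than writing `access_key` and `secret_key` to
   `aws/config/root`, you could export the standard AWS environment variables
   in the Vault server's environment before startup (a sketch; the values are
   the placeholders used above):

   ```text
   $ export AWS_ACCESS_KEY_ID=AKIAJWVN5Z4FOFT7NLNA
   $ export AWS_SECRET_ACCESS_KEY=R4nm063hgMVo4BTT5xOs5nHLeLXA6lar7ZJ3Nt0i
   ```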
In some cases, you cannot set sensitive IAM security credentials in your
Vault configuration. For example, your organization may require that all
security credentials are short-lived or explicitly tied to a machine identity.
To provide IAM security credentials to Vault, we recommend using Vault
[plugin workload identity federation](#plugin-workload-identity-federation-wif)
(WIF).
~> **Notice:** Even though the path above is `aws/config/root`, do not use
your AWS root account credentials. Instead, generate a dedicated user or
role.
1. Alternatively, configure the audience claim value and the role ARN to assume for plugin workload identity federation:
```text
$ vault write aws/config/root \
identity_token_audience="<TOKEN AUDIENCE>" \
role_arn="<AWS ROLE ARN>"
```
Vault's identity token provider will internally sign the plugin identity token JWT.
Given a trust relationship is configured between Vault and AWS via
Web Identity Federation, the secrets engine can exchange this identity token to obtain
ephemeral STS credentials.
   ~> **Notice:** For this trust relationship to be established, AWS must have
an [IAM OIDC identity provider](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html)
configured with information about the fully qualified and network-reachable
Issuer URL for Vault's plugin [identity token provider](/vault/api-docs/secret/identity/tokens#read-plugin-identity-well-known-configurations).
This is to ensure that AWS can fetch the JWKS [public keys](/vault/api-docs/secret/identity/tokens#read-active-public-keys)
and verify the plugin identity token signature. To configure Vault's Issuer,
please refer to the Identity Tokens
   [documentation](/vault/api-docs/secret/identity/tokens#configure-the-identity-tokens-backend).
1. Configure a Vault role that maps to a set of permissions in AWS as well as an
AWS credential type. When users generate credentials, they are generated
against this role. An example:
```text
$ vault write aws/roles/my-role \
credential_type=iam_user \
policy_document=-<<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:*",
"Resource": "*"
}
]
}
EOF
```
This creates a role named "my-role". When users generate credentials against
this role, Vault will create an IAM user and attach the specified policy
document to the IAM user. Vault will then create an access key and secret
   key for the IAM user and return these credentials. You can supply an inline
   user policy, references to existing managed policy ARNs, a list of IAM
   groups, or any combination of these:
```text
$ vault write aws/roles/my-other-role \
policy_arns=arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess,arn:aws:iam::aws:policy/IAMReadOnlyAccess \
iam_groups=group1,group2 \
credential_type=iam_user \
policy_document=-<<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:*",
"Resource": "*"
}
]
}
EOF
```
For more information on IAM policies, please see the
[AWS IAM policy documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/PoliciesOverview.html).
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can generate credentials.
1. Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```text
$ vault read aws/creds/my-role
Key Value
--- -----
lease_id aws/creds/my-role/f3e92392-7d9c-09c8-c921-575d62fe80d8
lease_duration 768h
lease_renewable true
access_key AKIAIOWQXTLW36DV7IEA
secret_key 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
session_token <nil>
```
Each invocation of the command will generate a new credential.
Unfortunately, IAM credentials are eventually consistent with respect to
   other Amazon services. If you are planning on using these credentials in a
pipeline, you may need to add a delay of 5-10 seconds (or more) after
fetching credentials before they can be used successfully.
If you want to be able to use credentials without the wait, consider using
the STS method of fetching keys. IAM credentials supported by an STS token
are available for use as soon as they are generated.
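   For example, a pipeline might fetch credentials and pause before using
   them. This is a minimal sketch that assumes `jq` is installed; the
   10-second delay is a starting point, not a guarantee:

   ```text
   $ creds=$(vault read -format=json aws/creds/my-role)
   $ export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r .data.access_key)
   $ export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r .data.secret_key)
   $ sleep 10  # wait out IAM eventual consistency before calling AWS
   ```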
1. Rotate the credentials that Vault uses to communicate with AWS:
```text
$ vault write -f aws/config/rotate-root
Key Value
--- -----
access_key AKIA3ALIVABCDG5XC8H4
```
<Note>
Calls from Vault to AWS may fail immediately after calling `aws/config/rotate-root` until
AWS becomes consistent again. Refer to
the <a href="/vault/api-docs/secret/aws#rotate-root-iam-credentials">AWS secrets engine API</a> reference
for additional information on rotating IAM credentials.
</Note>
## IAM permissions policy for Vault
The `aws/config/root` credentials need permission to manage dynamic IAM users.
Here is an example AWS IAM policy that grants the most commonly required
permissions Vault needs:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iam:AttachUserPolicy",
"iam:CreateAccessKey",
"iam:CreateUser",
"iam:DeleteAccessKey",
"iam:DeleteUser",
"iam:DeleteUserPolicy",
"iam:DetachUserPolicy",
"iam:GetUser",
"iam:ListAccessKeys",
"iam:ListAttachedUserPolicies",
"iam:ListGroupsForUser",
"iam:ListUserPolicies",
"iam:PutUserPolicy",
"iam:AddUserToGroup",
"iam:RemoveUserFromGroup",
"iam:TagUser"
],
"Resource": ["arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:user/vault-*"]
}
]
}
```
Vault also supports AWS Permissions Boundaries when creating IAM users. If you
wish to enforce that Vault always attaches a permissions boundary to an IAM
user, you can use a policy like:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iam:CreateAccessKey",
"iam:DeleteAccessKey",
"iam:DeleteUser",
"iam:GetUser",
"iam:ListAccessKeys",
"iam:ListAttachedUserPolicies",
"iam:ListGroupsForUser",
"iam:ListUserPolicies",
"iam:AddUserToGroup",
"iam:RemoveUserFromGroup"
],
"Resource": ["arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:user/vault-*"]
},
{
"Effect": "Allow",
"Action": [
"iam:AttachUserPolicy",
"iam:CreateUser",
"iam:DeleteUserPolicy",
"iam:DetachUserPolicy",
"iam:PutUserPolicy"
],
"Resource": ["arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:user/vault-*"],
"Condition": {
"StringEquals": {
"iam:PermissionsBoundary": [
"arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:policy/PolicyName"
]
}
}
}
]
}
```
where the "iam:PermissionsBoundary" condition contains the list of permissions
boundary policies that you wish to ensure that Vault uses. This policy will
ensure that Vault uses one of the permissions boundaries specified (not all of
them).
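On the Vault side, you instruct Vault to attach a permissions boundary by
setting it on the role. A sketch, assuming the `permissions_boundary_arn` role
parameter and illustrative names:

```shell-session
$ vault write aws/roles/bounded-role \
    credential_type=iam_user \
    permissions_boundary_arn=arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:policy/PolicyName \
    policy_arns=arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess
```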
## Plugin Workload Identity Federation (WIF)
<EnterpriseAlert product="vault" />
The AWS secrets engine supports the Plugin WIF workflow, and has a source of identity called
a plugin identity token. The plugin identity token is a JWT that is internally signed by Vault's
[plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration).
If there is a trust relationship configured between Vault and AWS through
[Web Identity Federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html),
the secrets engine can exchange its identity token for short-lived STS credentials needed to
perform its actions.
Exchanging identity tokens for STS credentials lets the AWS secrets engine
operate without configuring explicit access to sensitive IAM security
credentials.
To configure the secrets engine to use plugin WIF:
1. Ensure that Vault [openid-configuration](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-openid-configuration)
and [public JWKS](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-public-jwks)
APIs are network-reachable by AWS. We recommend using an API proxy or gateway
if you need to limit Vault API exposure.
1. Create an
[IAM OIDC identity provider](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html)
in AWS.
1. The provider URL **must** point at your [Vault plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration) with the
`/.well-known/openid-configuration` suffix removed. For example:
`https://host:port/v1/identity/oidc/plugins`.
1. The audience should uniquely identify the recipient of the plugin identity
token. In AWS, the recipient is the identity provider. We recommend using
the `host:port/v1/identity/oidc/plugins` portion of the provider URL as your
recipient since it will be unique for each configured identity provider.
1. Create a [web identity role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp_oidc.html#idp_oidc_Create)
in AWS with the same audience used for your IAM OIDC identity provider.
1. Configure the AWS secrets engine with the IAM OIDC audience value and web
identity role ARN.
```shell-session
$ vault write aws/config/root \
identity_token_audience="vault.example/v1/identity/oidc/plugins" \
role_arn="arn:aws:iam::123456789123:role/example-web-identity-role"
```
Your secrets engine can now use plugin WIF for its configuration credentials.
By default, WIF [credentials](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html)
have a time-to-live of 1 hour and automatically refresh when they expire.
Please see the [API documentation](/vault/api-docs/secret/aws#configure-root-credentials)
for more details on the fields associated with plugin WIF.
## STS credentials
The examples above demonstrate usage with the `iam_user` credential type. As mentioned,
Vault also supports `assumed_role`, `federation_token`, and `session_token`
credential types.
### STS federation tokens
~> **Notice:** Due to limitations in AWS, in order to use the `federation_token`
credential type, Vault **must** be configured with IAM user credentials. AWS
does not allow temporary credentials (such as those from an IAM instance
profile) to be used.
An STS federation token inherits a set of permissions that are the combination
(intersection) of four sets of permissions:
1. The permissions granted to the `aws/config/root` credentials
2. The user inline policy configured in the Vault role
3. The managed policy ARNs configured in the Vault role
4. An implicit deny policy on IAM or STS operations.
Roles with a `credential_type` of `federation_token` can specify one or more of
the `policy_document`, `policy_arns`, and `iam_groups` parameters in the Vault
role.
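For example, a federation token role could reference a managed policy ARN and
an IAM group instead of an inline policy document (names are illustrative):

```shell-session
$ vault write aws/roles/ec2_readonly \
    credential_type=federation_token \
    policy_arns=arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess \
    iam_groups=group1
```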
The `aws/config/root` credentials require IAM permissions for
`sts:GetFederationToken` and the permissions to delegate to the STS
federation token. For example, this policy on the `aws/config/root` credentials
would allow creation of an STS federated token with delegated `ec2:*`
permissions (or any subset of `ec2:*` permissions):
```javascript
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": [
"ec2:*",
"sts:GetFederationToken"
],
"Resource": "*"
}
}
```
An `ec2_admin` role would then assign an inline policy with the same `ec2:*`
permissions.
```shell-session
$ vault write aws/roles/ec2_admin \
credential_type=federation_token \
    policy_document=@policy.json
```
The `policy.json` file would contain an inline policy with similar permissions,
less the `sts:GetFederationToken` permission. (We could grant
`sts:GetFederationToken` permissions, but STS attaches an implicit deny
that overrides the allow.)
```javascript
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "ec2:*",
"Resource": "*"
}
}
```
To generate a new set of STS federation token credentials, we simply write to
the role using the `aws/sts` endpoint:
```shell-session
$ vault write aws/sts/ec2_admin ttl=60m
Key Value
lease_id aws/sts/ec2_admin/31d771a6-fb39-f46b-fdc5-945109106422
lease_duration 60m0s
lease_renewable false
access_key ASIAJYYYY2AA5K4WIXXX
secret_key HSs0DYYYYYY9W81DXtI0K7X84H+OVZXK5BXXXX
session_token AQoDYXdzEEwasAKwQyZUtZaCjVNDiXXXXXXXXgUgBBVUUbSyujLjsw6jYzboOQ89vUVIehUw/9MreAifXFmfdbjTr3g6zc0me9M+dB95DyhetFItX5QThw0lEsVQWSiIeIotGmg7mjT1//e7CJc4LpxbW707loFX1TYD1ilNnblEsIBKGlRNXZ+QJdguY4VkzXxv2urxIH0Sl14xtqsRPboV7eYruSEZlAuP3FLmqFbmA0AFPCT37cLf/vUHinSbvw49C4c9WQLH7CeFPhDub7/rub/QU/lCjjJ43IqIRo9jYgcEvvdRkQSt70zO8moGCc7pFvmL7XGhISegQpEzudErTE/PdhjlGpAKGR3d5qKrHpPYK/k480wk1Ai/t1dTa/8/3jUYTUeIkaJpNBnupQt7qoaXXXXXXXXXX
```
### STS session tokens
The `session_token` credential type is used to generate short-lived credentials under the root config.
To create these with Vault and AWS, you must configure Vault to use IAM user credentials. AWS does not
allow temporary credentials, like those from an IAM instance profile, to be used when generating session tokens.
<Warning>
STS session tokens inherit any and all permissions granted to the user configured in `aws/config/root`.
In this example, the `temp_user` role will obtain a policy with the same `ec2:*` permissions as the
root config. For this reason, assigning a role or policy is disallowed for this credential type.
</Warning>
```shell-session
$ vault write aws/roles/temp_user \
credential_type=session_token
```
To generate a new set of STS session token credentials, read from the `temp_user`
role using the `aws/creds` endpoint:
```shell-session
$ vault read aws/creds/temp_user ttl=60m
Key Value
lease_id aws/creds/temp_user/w4eKbMaJOi1xLqG3MWk7y8n6
lease_duration 60m0s
lease_renewable false
access_key ASIAJYYYY2AA5K4WIXXX
secret_key HSs0DYYYYYY9W81DXtI0K7X84H+OVZXK5BXXXX
session_token AQoDYXdzEEwasAKwQyZUtZaCjVNDiXXXXXXXXgUgBBVUUbSyujLjsw6jYzboOQ89vUVIehUw/9MreAifXFmfdbjTr3g6zc0me9M+dB95DyhetFItX5QThw0lEsVQWSiIeIotGmg7mjT1//e7CJc4LpxbW707loFX1TYD1ilNnblEsIBKGlRNXZ+QJdguY4VkzXxv2urxIH0Sl14xtqsRPboV7eYruSEZlAuP3FLmqFbmA0AFPCT37cLf/vUHinSbvw49C4c9WQLH7CeFPhDub7/rub/QU/lCjjJ43IqIRo9jYgcEvvdRkQSt70zO8moGCc7pFvmL7XGhISegQpEzudErTE/PdhjlGpAKGR3d5qKrHpPYK/k480wk1Ai/t1dTa/8/3jUYTUeIkaJpNBnupQt7qoaXXXXXXXXXX
```
Session tokens may also require an MFA-based TOTP to be provided if the IAM user is configured to require it.
If so, the Vault role requires the MFA device serial number to be set, and the TOTP may be provided when
reading credentials from the Vault role.
```shell-session
$ vault write aws/roles/mfa_user \
credential_type=session_token \
mfa_serial_number="arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:mfa/device-name"
```
```shell-session
$ vault read aws/creds/mfa_user mfa_code=123456
```
### STS AssumeRole
The `assumed_role` credential type is typically used for cross-account
authentication or single sign-on (SSO) scenarios. In order to use an
`assumed_role` credential type, you must configure the following outside of Vault:
1. An IAM role
2. IAM inline policies and/or managed policies attached to the IAM role
3. IAM trust policy attached to the IAM role to grant privileges for Vault to
assume the role
`assumed_role` credentials offer a few benefits over `federation_token`:
1. Assumed roles can invoke IAM and STS operations, if granted by the role's
IAM policies.
2. Assumed roles support cross-account authentication.
3. Temporary credentials (such as those granted by running Vault on an EC2
instance in an IAM instance profile) can retrieve `assumed_role` credentials
(but cannot retrieve `federation_token` credentials).
The `aws/config/root` credentials must be allowed `sts:AssumeRole` through one of
two methods:
1. The credentials have an IAM policy attached that allows `sts:AssumeRole` against the target role:
```javascript
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:role/RoleNameToAssume"
}
}
```
1. A trust policy is attached to the target IAM role for the principal:
```javascript
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:user/VAULT-AWS-ROOT-CONFIG-USER-NAME"
},
"Action": "sts:AssumeRole"
}
]
}
```
When specifying a Vault role with a `credential_type` of `assumed_role`, you can
specify more than one IAM role ARN. If you do so, Vault clients can select which
role ARN they would like to assume when retrieving credentials from that role.
Further, you can specify both the `policy_document` and `policy_arns` parameters;
if specified, each acts as a filter on the IAM permissions granted to the
assumed role. If `iam_groups` is specified, the inline and attached policies for
each IAM group will be added to the `policy_document` and `policy_arns`
parameters, respectively, when calling [sts:AssumeRole]. For an action to be
allowed, it must be permitted by all of: the IAM policy on the AWS role that is
assumed, the `policy_document` specified on the Vault role (if specified), and
the managed policies specified by the `policy_arns` parameter. (The
`policy_document` parameter is passed in as the `Policy` parameter to the
[sts:AssumeRole] API call, while the `policy_arns` parameter is passed in as the
`PolicyArns` parameter to the same call.)
Note: When multiple `role_arns` are specified, clients requesting credentials
can specify any of the role ARNs that are defined on the Vault role in order to
retrieve credentials. However, when `policy_document`, `policy_arns`, or
`iam_groups` are specified, that will apply to ALL role credentials retrieved
from AWS.
Let's create a "deploy" Vault role using the ARN of the AWS role to assume:
```shell-session
$ vault write aws/roles/deploy \
role_arns=arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:role/RoleNameToAssume \
credential_type=assumed_role
```
To generate a new set of STS assumed role credentials, we again write to
the role using the `aws/sts` endpoint:
```shell-session
$ vault write aws/sts/deploy ttl=60m
Key Value
lease_id aws/sts/deploy/31d771a6-fb39-f46b-fdc5-945109106422
lease_duration 60m0s
lease_renewable false
access_key ASIAJYYYY2AA5K4WIXXX
secret_key HSs0DYYYYYY9W81DXtI0K7X84H+OVZXK5BXXXX
session_token AQoDYXdzEEwasAKwQyZUtZaCjVNDiXXXXXXXXgUgBBVUUbSyujLjsw6jYzboOQ89vUVIehUw/9MreAifXFmfdbjTr3g6zc0me9M+dB95DyhetFItX5QThw0lEsVQWSiIeIotGmg7mjT1//e7CJc4LpxbW707loFX1TYD1ilNnblEsIBKGlRNXZ+QJdguY4VkzXxv2urxIH0Sl14xtqsRPboV7eYruSEZlAuP3FLmqFbmA0AFPCT37cLf/vUHinSbvw49C4c9WQLH7CeFPhDub7/rub/QU/lCjjJ43IqIRo9jYgcEvvdRkQSt70zO8moGCc7pFvmL7XGhISegQpEzudErTE/PdhjlGpAKGR3d5qKrHpPYK/k480wk1Ai/t1dTa/8/3jUYTUeIkaJpNBnupQt7qoaXXXXXXXXXX
```
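If the Vault role defines multiple `role_arns`, the client selects one at
generation time. A sketch, assuming the ARN below is among those configured on
the role:

```shell-session
$ vault write aws/sts/deploy \
    role_arn=arn:aws:iam::ACCOUNT-ID-WITHOUT-HYPHENS:role/RoleNameToAssume \
    ttl=60m
```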
[sts:assumerole]: https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html
## Troubleshooting
### Dynamic IAM user errors
If you get an error message similar to either of the following, the root credentials that you wrote to `aws/config/root` have insufficient privilege:
```shell-session
$ vault read aws/creds/deploy
* Error creating IAM user: User: arn:aws:iam::000000000000:user/hashicorp is not authorized to perform: iam:CreateUser on resource: arn:aws:iam::000000000000:user/vault-root-1432735386-4059
$ vault revoke aws/creds/deploy/774cfb27-c22d-6e78-0077-254879d1af3c
Revoke error: Error making API request.
URL: POST http://127.0.0.1:8200/v1/sys/revoke/aws/creds/deploy/774cfb27-c22d-6e78-0077-254879d1af3c
Code: 400. Errors:
* invalid request
```
If you get stuck at any time, run `vault path-help aws`, or `vault path-help`
with a subpath, for interactive help output.
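For example, to see help for a specific credentials path (role name
illustrative):

```shell-session
$ vault path-help aws/creds/my-role
```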
### STS federated token errors
Vault generates STS tokens using the IAM credentials passed to `aws/config/root`.
Those credentials must have two properties:
- They must have permissions to call `sts:GetFederationToken`.
- The capabilities of those credentials have to be at least as permissive as those requested
by policies attached to the STS creds.
If either of those conditions are not met, a "403 not-authorized" error will be returned.
See https://docs.aws.amazon.com/STS/latest/APIReference/API_GetFederationToken.html for more details.
Vault 0.5.1 or later is recommended when using STS tokens to avoid validation
errors for exceeding the AWS limit of 32 characters on STS token names.
<Note title="AWS character limit includes path">
The AWS character limit for token names **includes** the full path to
the token. For example, `aws/sts/dev005_vault-test_testtest` (34
characters) exceeds the limit, but `aws/roles/dev005_vaulttest-test` (31
characters) does not.
</Note>
### AWS instance metadata timeouts
@include 'aws-imds-timeout.mdx'
## API
The AWS secrets engine has a full HTTP API. Please see the
[AWS secrets engine API](/vault/api-docs/secret/aws) for more
details.
vault KMIP secrets engine The KMIP secrets engine allows Vault to act as a KMIP server provider and handle the lifecycle of its KMIP managed objects layout docs page title KMIP Secrets Engines | ---
layout: docs
page_title: KMIP - Secrets Engines
description: |-
The KMIP secrets engine allows Vault to act as a KMIP server provider and
handle the lifecycle of its KMIP managed objects.
---
# KMIP secrets engine
@include 'alerts/enterprise-and-hcp.mdx'
The KMIP secrets engine requires [Vault Enterprise](https://www.hashicorp.com/products/vault/pricing)
with the Advanced Data Protection (ADP) module.
The KMIP secrets engine allows Vault to act as a [Key Management
Interoperability Protocol][kmip-spec] (KMIP) server provider and handle
the lifecycle of its KMIP managed objects. KMIP is a standardized protocol that allows
services and applications to perform cryptographic operations without having to
manage cryptographic material, otherwise known as managed objects, by delegating
its storage and lifecycle to a key management server.
Vault's KMIP secrets engine listens on a separate port from the standard Vault
listener. Each Vault server in a Vault cluster configured with a KMIP secrets
engine uses the same listener configuration. The KMIP listener defaults to port
5696 and is configurable to alternative ports, for example, if there are
multiple KMIP secrets engine mounts configured. KMIP clients connect and
authenticate to this KMIP secrets engine listener port using generated TLS
certificates. KMIP clients may connect directly to the Vault active server, or
any of the Vault performance standby servers, on the configured KMIP port. A
layer 4 TCP load balancer may be used in front of the Vault servers' KMIP ports.
The load balancer should support long-lived connections, and it may use a round
robin routing algorithm, as Vault servers will forward requests to the primary
Vault server if necessary.
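
For example, if a second KMIP secrets engine mount is needed, it can be
configured to listen on an alternative port. A minimal sketch, where the
`kmip2` mount name and port 5697 are illustrative:

```text
$ vault secrets enable -path=kmip2 kmip
Success! Enabled the kmip secrets engine at: kmip2/

$ vault write kmip2/config listen_addrs=0.0.0.0:5697
```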
## KMIP conformance
Vault implements version 1.4 of the following Key Management Interoperability Protocol Profiles:
* [Baseline Server][baseline-server]
* Supports all profile attributes except for *Key Value Location*.
* Supports all profile operations except for *Check*.
* Operation *Locate* supports all profile attributes except for *Key Value Location*.
* [Symmetric Key Lifecycle Server][lifecycle-server]
* Supports cryptographic algorithm *AES* (*3DES* is not supported).
* Only the *Raw* key format type is supported. (*Transparent Symmetric Key* is not supported).
* [Basic Cryptographic Server][basic-cryptographic-server]
* Supports block cipher modes *CBC*, *CFB*, *CTR*, *ECB*, *GCM*, and *OFB*.
* On multi-part (streaming) operations, block cipher mode *GCM* is not supported.
* The supported padding methods are *None* and *PKCS5*.
* [Asymmetric Key Lifecycle Server][asymmetric-key-lifecycle-server]
* Supports *Public Key* and *Private Key* objects.
  * Supports the *RSA* cryptographic algorithm.
* Supports *PKCS#1*, *PKCS#8*, *X.509*, *Transparent RSA Public Key* and *Transparent RSA Private Key* key format types.
* [Advanced Cryptographic Server][advanced-cryptographic-server]
* Supports *Encrypt*, *Decrypt*, *Sign*, *Signature Verify*, *MAC*, *MAC Verify*, *RNG Retrieve*, and *RNG Seed* client-to-server operations.
* The supported hashing algorithms for Sign and Signature Verify operations are *SHA224*, *SHA256*, *SHA384*, *SHA512*, *RIPEMD160*, *SHA512_224*, *SHA512_256*, *SHA3_224*, *SHA3_256*, *SHA3_384*, and *SHA3_512* for *PSS* padding method, and algorithms *SHA224*, *SHA256*, *SHA384*, *SHA512*, and *RIPEMD160* for *PKCS1v15* padding method.
* The supported hashing algorithms for MAC and MAC Verify operations are *SHA224*, *SHA256*, *SHA384*, *SHA512*, *RIPEMD160*, *SHA512_224*, *SHA512_256*, *SHA3_224*, *SHA3_256*, *SHA3_384*, and *SHA3_512* (*MD4*, *MD5*, and *SHA1* are not supported).
Refer to [KMIP - Profiles Support](/vault/docs/secrets/kmip-profiles) page for more details.
[baseline-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431430
[lifecycle-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431487
[basic-cryptographic-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431527
[asymmetric-key-lifecycle-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431516
[advanced-cryptographic-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431528
## Setup
The KMIP secrets engine must be configured before it can start accepting KMIP
requests.
1. Enable the KMIP secrets engine
```text
$ vault secrets enable kmip
Success! Enabled the kmip secrets engine at: kmip/
```
1. Configure the secrets engine with the desired listener addresses to use and
TLS parameters, or leave unwritten to use default values
```text
$ vault write kmip/config listen_addrs=0.0.0.0:5696
```
### KMIP certificate authority for client certificates
When the KMIP Secrets Engine is initially configured, Vault generates a KMIP
Certificate Authority (CA) whose only purpose is to authenticate KMIP client
certificates.
Vault uses the internal KMIP CA to generate certificates for clients
authenticating to Vault with the KMIP protocol. You cannot import external KMIP
authorities. All KMIP authentication must use the internally-generated KMIP CA.
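
To inspect the generated CA, read the `ca` endpoint on the mount. A minimal
sketch, assuming the default `kmip/` mount path (the PEM output is elided
here):

```text
$ vault read kmip/ca
```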
## Usage
### Scopes and roles
The KMIP secrets engine uses the concept of scopes to partition KMIP managed
object storage into multiple named buckets. Within a scope, roles can be created
which dictate the set of allowed operations that the particular role can perform.
TLS client certificates can be generated for a role, which services and applications
can then use when sending KMIP requests against Vault's KMIP secret engine.
In order to generate client certificates for KMIP clients to interact with Vault's
KMIP server, we must first create a scope and role and specify the desired set of
allowed operations for it.
1. Create a scope:
```text
$ vault write -f kmip/scope/my-service
Success! Data written to: kmip/scope/my-service
```
1. Create a role within the scope, specifying the set of operations to allow or
deny.
```text
$ vault write kmip/scope/my-service/role/admin operation_all=true
Success! Data written to: kmip/scope/my-service/role/admin
```
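
With the scope and role in place, you can list them to verify the
configuration (a quick check, assuming the default mount path):

```text
$ vault list kmip/scope
Keys
----
my-service

$ vault list kmip/scope/my-service/role
Keys
----
admin
```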
### Supported KMIP operations
The KMIP secrets engine currently supports the following set of operations:
```text
operation_activate
operation_add_attribute
operation_create
operation_create_keypair
operation_decrypt
operation_delete_attribute
operation_destroy
operation_discover_versions
operation_encrypt
operation_get
operation_get_attribute_list
operation_get_attributes
operation_import
operation_locate
operation_mac
operation_mac_verify
operation_modify_attribute
operation_query
operation_register
operation_rekey
operation_rekey_keypair
operation_revoke
operation_sign
operation_signature_verify
operation_rng_seed
operation_rng_retrieve
```
Additionally, there are two pseudo-operations that can be used to allow or deny
all operation capabilities to a role. These operations are mutually exclusive
with all other operations. That is, if a pseudo-operation is provided during
role creation or update, no other operations can be provided. Similarly, if an
existing role that contains a pseudo-operation is updated with a set of
supported operations, the pseudo-operation is overwritten with the newly
provided set of operations.
Pseudo-operations:
```text
operation_all
operation_none
```
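
For example, a role restricted to read-style operations might be created as
follows. This is a sketch: the `monitor` role name and the chosen operation
set are illustrative:

```text
$ vault write kmip/scope/my-service/role/monitor \
    operation_get=true \
    operation_get_attributes=true \
    operation_locate=true
Success! Data written to: kmip/scope/my-service/role/monitor
```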
### Client certificate generation
Once a scope and role have been created, client certificates can be generated for
that role. The client certificate can then be provided to applications and
services that support KMIP to establish communication with Vault's KMIP server.
Scope and role identifiers are embedded in the certificate,
which will be used when evaluating permissions during a KMIP request.
1. Generate a client certificate. This returns the CA Chain, the certificate,
and the private key.
```text
$ vault write -f kmip/scope/my-service/role/admin/credential/generate
Key Value
--- -----
ca_chain [-----BEGIN CERTIFICATE-----
MIICNTCCAZigAwIBAgIUKqNFb3Zy+8ypIhTDs/2/8f/xEI8wCgYIKoZIzj0EAwIw
HTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MB4XDTE5MDYyNDE4MjQyN1oX
DTI5MDYyMTE4MjQ1N1owKjEoMCYGA1UEAxMfdmF1bHQta21pcC1kZWZhdWx0LWlu
dGVybWVkaWF0ZTCBmzAQBgcqhkjOPQIBBgUrgQQAIwOBhgAEAbniGNXHOiPvSb0I
fbc1B9QkOmdT2Ecx2WaQPLISplmO0Jm0u0z11CGuf3Igby7unnCNvCuCXrKJFCsQ
8JGhwknNAG3eesSZxG4tklA6FMZjE9ETUtYfjH7Z4vuJSw/fxOeey7fhrqAzhV3P
GRkvA9EQUHJOeV4rEpiINP/fneHNfsn1o2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYD
VR0TAQH/BAgwBgEB/wIBCTAdBgNVHQ4EFgQUR0o0v4rPiBU9RwQfEUucx3JwbPAw
HwYDVR0jBBgwFoAUMhORultSN+ABogxQdkt7KChD0wQwCgYIKoZIzj0EAwIDgYoA
MIGGAkF1IvkIaXNkVfe+q0V78CnX0XIJuvmPpgjN8AQzqLci8txikd9gF1zt8fFQ
gIKERm2QPrshSV9srHDB0YnThRKuiQJBNcDjCfYOzqKlBHifT4WT4OX1U6nP/Y2b
imGaLJK9VIwfcJOpVCFGp7Xi8QGV6rJIFiQAqzqCy69vcU6nVMsvens=
-----END CERTIFICATE----- -----BEGIN CERTIFICATE-----
MIICKjCCAYugAwIBAgIUerDfApmkq0VYychkhlxEnBlIDUcwCgYIKoZIzj0EAwIw
HTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MB4XDTE5MDYyNDE4MjQyNloX
DTI5MDYyMTE4MjQ1NlowHTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MIGb
MBAGByqGSM49AgEGBSuBBAAjA4GGAAQBA466Axrrz+HWanNe35gPVvB7OE7TWZcc
QZw1QSMQ+QIQMu5NcdfvZfh68exhe1FiJezKB+zeoJWp1Q/kqhyh7fsAFUuIcJDO
okZYPTmjPh3h5IZLPg5r7Pw1j99rLHhc/EXF9wYVy2UeH/2IqGJ+cncmVgqczlG8
m36g9OXd6hkofhCjZjBkMA4GA1UdDwEB/wQEAwIBBjASBgNVHRMBAf8ECDAGAQH/
AgEKMB0GA1UdDgQWBBQyE5G6W1I34AGiDFB2S3soKEPTBDAfBgNVHSMEGDAWgBQy
E5G6W1I34AGiDFB2S3soKEPTBDAKBggqhkjOPQQDAgOBjAAwgYgCQgGtPVCtgDc1
0SrTsVpEtUMYQKbOWnTKNHZ9h5jSna8n9aY+70Ai3U57q3FL95iIhZRW79PRpp65
d6tWqY51o2hHpwJCAK+eE7xpdnqh5H8TqAXKVuSoC0WEsovYCD03c8Ih3jWcZn6N
kbz2kXPcAk+dE6ncnwhwqNQgsJQGgQzJroH+Zzvb
-----END CERTIFICATE-----]
certificate -----BEGIN CERTIFICATE-----
MIICOzCCAZygAwIBAgIUN5V7bLAGu8QIUFxlIugg8fBb+eYwCgYIKoZIzj0EAwIw
KjEoMCYGA1UEAxMfdmF1bHQta21pcC1kZWZhdWx0LWludGVybWVkaWF0ZTAeFw0x
OTA2MjQxODQ3MTdaFw0xOTA2MjUxODQ3NDdaMCAxDjAMBgNVBAsTBWNqVVNJMQ4w
DAYDVQQDEwVkdjRZbTCBmzAQBgcqhkjOPQIBBgUrgQQAIwOBhgAEANVsHV8CHYpW
CBKbYVEx/sLphk67SdWxbII4Sc9Rj1KymApD4gPmS+rw0FDMZGFbn1sAfpqMBqMj
ylv72o9izbYSALHnYT+AaE0NFn4eGWZ2G0p56cVmfXm3ZI959E+3gvZK6X5Jnzm4
FKXTDKGA4pocYec/rnYJ5X8sbAJKHvk1OeO+o2cwZTAOBgNVHQ8BAf8EBAMCA6gw
EwYDVR0lBAwwCgYIKwYBBQUHAwIwHQYDVR0OBBYEFBEIsBo3HiBIg2l2psaQoYkT
D1RNMB8GA1UdIwQYMBaAFEdKNL+Kz4gVPUcEHxFLnMdycGzwMAoGCCqGSM49BAMC
A4GMADCBiAJCAc8DV23DJsHV4fdmbmssu0eDIgNH+PrRKdYgqiHptbuVjF2qbILp
Z34dJRVN+R9B+RprZXkYiv7gJ/47KSUKzRZpAkIByMjZqLtcypamJM/t+/O1BSst
CWcblb45FIxAmO4hE00Q5wnwXNxNnDHXWiuGdSNmIBjpb9nM5wehQlbkx7HzvPk=
-----END CERTIFICATE-----
private_key -----BEGIN EC PRIVATE KEY-----
MIHcAgEBBEIB9Nn7M28VUVW6g5IlOTS3bHIZYM/zqVy+PvYQxn2lFbg1YrQzfd7h
sdtCjet0lc7pvtoOwd1dFiATOGg98OVN7MegBwYFK4EEACOhgYkDgYYABADVbB1f
Ah2KVggSm2FRMf7C6YZOu0nVsWyCOEnPUY9SspgKQ+ID5kvq8NBQzGRhW59bAH6a
jAajI8pb+9qPYs22EgCx52E/gGhNDRZ+HhlmdhtKeenFZn15t2SPefRPt4L2Sul+
SZ85uBSl0wyhgOKaHGHnP652CeV/LGwCSh75NTnjvg==
-----END EC PRIVATE KEY-----
serial_number 317328055225536560033788492808123425026102524390
```
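
Generated credentials can later be listed and revoked by serial number. A
minimal sketch, reusing the scope, role, and serial number from the example
above:

```text
$ vault list kmip/scope/my-service/role/admin/credential

$ vault write kmip/scope/my-service/role/admin/credential/revoke \
    serial_number=317328055225536560033788492808123425026102524390
```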
### Client certificate signing
As an alternative to the above section on generating client certificates,
the KMIP secrets engine supports signing of Certificate Signing Requests
(CSRs). Normally the above generation process is simpler, but some KMIP
clients prefer (or only support) retaining the private key associated
with their client certificate.
1. In this workflow the first step is KMIP-client dependent: use the KMIP
client's UI or CLI to create a client certificate CSR in PEM format.
2. Sign the client certificate. This returns the CA Chain and the certificate,
but not the private key, which never leaves the KMIP client.
```text
$ vault write kmip/scope/my-service/role/admin/credential/sign csr="$(cat my-csr.pem)"
Key Value
--- -----
ca_chain [-----BEGIN CERTIFICATE-----
MIICNTCCAZigAwIBAgIUKqNFb3Zy+8ypIhTDs/2/8f/xEI8wCgYIKoZIzj0EAwIw
HTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MB4XDTE5MDYyNDE4MjQyN1oX
DTI5MDYyMTE4MjQ1N1owKjEoMCYGA1UEAxMfdmF1bHQta21pcC1kZWZhdWx0LWlu
dGVybWVkaWF0ZTCBmzAQBgcqhkjOPQIBBgUrgQQAIwOBhgAEAbniGNXHOiPvSb0I
fbc1B9QkOmdT2Ecx2WaQPLISplmO0Jm0u0z11CGuf3Igby7unnCNvCuCXrKJFCsQ
8JGhwknNAG3eesSZxG4tklA6FMZjE9ETUtYfjH7Z4vuJSw/fxOeey7fhrqAzhV3P
GRkvA9EQUHJOeV4rEpiINP/fneHNfsn1o2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYD
VR0TAQH/BAgwBgEB/wIBCTAdBgNVHQ4EFgQUR0o0v4rPiBU9RwQfEUucx3JwbPAw
HwYDVR0jBBgwFoAUMhORultSN+ABogxQdkt7KChD0wQwCgYIKoZIzj0EAwIDgYoA
MIGGAkF1IvkIaXNkVfe+q0V78CnX0XIJuvmPpgjN8AQzqLci8txikd9gF1zt8fFQ
gIKERm2QPrshSV9srHDB0YnThRKuiQJBNcDjCfYOzqKlBHifT4WT4OX1U6nP/Y2b
imGaLJK9VIwfcJOpVCFGp7Xi8QGV6rJIFiQAqzqCy69vcU6nVMsvens=
-----END CERTIFICATE----- -----BEGIN CERTIFICATE-----
MIICKjCCAYugAwIBAgIUerDfApmkq0VYychkhlxEnBlIDUcwCgYIKoZIzj0EAwIw
HTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MB4XDTE5MDYyNDE4MjQyNloX
DTI5MDYyMTE4MjQ1NlowHTEbMBkGA1UEAxMSdmF1bHQta21pcC1kZWZhdWx0MIGb
MBAGByqGSM49AgEGBSuBBAAjA4GGAAQBA466Axrrz+HWanNe35gPVvB7OE7TWZcc
QZw1QSMQ+QIQMu5NcdfvZfh68exhe1FiJezKB+zeoJWp1Q/kqhyh7fsAFUuIcJDO
okZYPTmjPh3h5IZLPg5r7Pw1j99rLHhc/EXF9wYVy2UeH/2IqGJ+cncmVgqczlG8
m36g9OXd6hkofhCjZjBkMA4GA1UdDwEB/wQEAwIBBjASBgNVHRMBAf8ECDAGAQH/
AgEKMB0GA1UdDgQWBBQyE5G6W1I34AGiDFB2S3soKEPTBDAfBgNVHSMEGDAWgBQy
E5G6W1I34AGiDFB2S3soKEPTBDAKBggqhkjOPQQDAgOBjAAwgYgCQgGtPVCtgDc1
0SrTsVpEtUMYQKbOWnTKNHZ9h5jSna8n9aY+70Ai3U57q3FL95iIhZRW79PRpp65
d6tWqY51o2hHpwJCAK+eE7xpdnqh5H8TqAXKVuSoC0WEsovYCD03c8Ih3jWcZn6N
kbz2kXPcAk+dE6ncnwhwqNQgsJQGgQzJroH+Zzvb
-----END CERTIFICATE-----]
certificate -----BEGIN CERTIFICATE-----
MIICOzCCAZygAwIBAgIUN5V7bLAGu8QIUFxlIugg8fBb+eYwCgYIKoZIzj0EAwIw
KjEoMCYGA1UEAxMfdmF1bHQta21pcC1kZWZhdWx0LWludGVybWVkaWF0ZTAeFw0x
OTA2MjQxODQ3MTdaFw0xOTA2MjUxODQ3NDdaMCAxDjAMBgNVBAsTBWNqVVNJMQ4w
DAYDVQQDEwVkdjRZbTCBmzAQBgcqhkjOPQIBBgUrgQQAIwOBhgAEANVsHV8CHYpW
CBKbYVEx/sLphk67SdWxbII4Sc9Rj1KymApD4gPmS+rw0FDMZGFbn1sAfpqMBqMj
ylv72o9izbYSALHnYT+AaE0NFn4eGWZ2G0p56cVmfXm3ZI959E+3gvZK6X5Jnzm4
FKXTDKGA4pocYec/rnYJ5X8sbAJKHvk1OeO+o2cwZTAOBgNVHQ8BAf8EBAMCA6gw
EwYDVR0lBAwwCgYIKwYBBQUHAwIwHQYDVR0OBBYEFBEIsBo3HiBIg2l2psaQoYkT
D1RNMB8GA1UdIwQYMBaAFEdKNL+Kz4gVPUcEHxFLnMdycGzwMAoGCCqGSM49BAMC
A4GMADCBiAJCAc8DV23DJsHV4fdmbmssu0eDIgNH+PrRKdYgqiHptbuVjF2qbILp
Z34dJRVN+R9B+RprZXkYiv7gJ/47KSUKzRZpAkIByMjZqLtcypamJM/t+/O1BSst
CWcblb45FIxAmO4hE00Q5wnwXNxNnDHXWiuGdSNmIBjpb9nM5wehQlbkx7HzvPk=
-----END CERTIFICATE-----
serial_number 317328055225536560033788492808123425026102524390
```
## Tutorial
Refer to the [KMIP Secrets Engine](/vault/tutorials/adp/kmip-engine)
guide for a step-by-step tutorial.
[kmip-spec]: http://docs.oasis-open.org/kmip/spec/v1.4/kmip-spec-v1.4.html
[kmip-ops]: http://docs.oasis-open.org/kmip/spec/v1.4/os/kmip-spec-v1.4-os.html#_Toc490660840
---
layout: docs
page_title: MongoDB Atlas - Secrets Engines
description: |-
The MongoDB Atlas secrets engine for Vault generates MongoDB Atlas
Programmatic API Keys dynamically.
---
# MongoDB Atlas secrets engine
The MongoDB Atlas secrets engine generates Programmatic API keys. The created MongoDB Atlas secrets are
time-based and are automatically revoked when the Vault lease expires, unless renewed.
Vault will create a Programmatic API key for each lease that provides appropriate access to the defined MongoDB Atlas
project or organization with the appropriate role(s). The MongoDB Atlas Programmatic API Key Public and
Private Keys are returned to the caller. To learn more about Programmatic API Keys visit the
[Programmatic API Keys Doc](https://www.mongodb.com/docs/atlas/configure-api-access/#programmatic-api-keys).
<Note>
The information below relates to the **MongoDB Atlas secrets engine**. Refer to the
[MongoDB Atlas **database** secrets engine](/vault/docs/secrets/databases/mongodbatlas)
for information about using the MongoDB Atlas database plugin for the Vault
database secrets engine.
</Note>
## Setup
Most secrets engines must be configured in advance before they can perform their functions. These
steps are usually completed by an operator or configuration management tool.
1. Enable the MongoDB Atlas secrets engine:
```shell-session
$ vault secrets enable mongodbatlas
Success! Enabled the mongodbatlas secrets engine at: mongodbatlas/
```
By default, the secrets engine will mount at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
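
   For example, a sketch of mounting the engine at an alternative path (the
   `mongo` path name is illustrative):

   ```shell-session
   $ vault secrets enable -path=mongo mongodbatlas
   ```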
1. It's necessary to generate and configure a MongoDB Atlas Programmatic API Key for your organization
or project that has sufficient permissions to allow Vault to create other Programmatic API Keys.
In order to grant Vault programmatic access to an organization or project using only the
[API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/) you need to create a MongoDB Atlas Programmatic API
Key with the appropriate roles if you have not already done so. A Programmatic API Key consists
of a public and private key, so ensure you have both. Regarding roles, the Organization Owner and
Project Owner roles should be sufficient for most needs, however be sure to check what each role
grants in the [MongoDB Atlas Programmatic API Key User Roles documentation](https://www.mongodb.com/docs/atlas/reference/user-roles/).
It is recommended to set an IP Network Access list when creating the key.
For more detailed instructions on how to create a Programmatic API Key in the Atlas UI, including
available roles, visit the [Programmatic API Key documentation](https://www.mongodb.com/docs/atlas/configure-api-access/#programmatic-api-keys).
1. Once you have a MongoDB Atlas Programmatic Key pair, as created in the previous step, Vault can now
be configured to use it with MongoDB Atlas:
```shell-session
$ vault write mongodbatlas/config \
public_key=yhltsvan \
private_key=2c130c23-e6b6-4da8-a93f-a8bf33218830
```
Internally, Vault will connect to MongoDB Atlas using these credentials. As such,
these credentials must be a superset of any policies which might be granted
on API Keys.
<Note>
It is highly recommended to _not_ use your MongoDB Atlas root account credentials.
Generate a dedicated Programmatic API key with appropriate roles instead.
</Note>
## Programmatic API keys
Programmatic API Key credential types use a Vault role to generate a Programmatic API Key at
either the MongoDB Atlas Organization or Project level with the designated role(s) for programmatic access.
Programmatic API Keys:
- Have two parts, a public key and a private key
- Cannot be used to log into Atlas through the user interface
- Must be granted appropriate roles to complete required tasks
- Must belong to one organization, but may be granted access to any number of
projects in that organization.
- May have an IP Network Access list configured and some capabilities may require a
Network Access list to be configured (these are noted in the MongoDB Atlas API
documentation).
Create a Vault role for a MongoDB Atlas Programmatic API Key by mapping appropriate arguments to the
organization or project designated:
- Organization API Key: Set `organization_id` argument with the appropriate
[Organization Level Roles](https://www.mongodb.com/docs/atlas/reference/user-roles/#organization-roles).
- Project API Key: Set `project_id` with the appropriate [Project Level Roles](https://www.mongodb.com/docs/atlas/reference/user-roles/#project-roles).
<Note>
Programmatic API keys can belong to only one Organization but can belong to one or more Projects.
</Note>
Examples:
```shell-session
$ vault write mongodbatlas/roles/test \
organization_id=5b23ff2f96e82130d0aaec13 \
roles=ORG_MEMBER
```
```shell-session
$ vault write mongodbatlas/roles/test \
project_id=5cf5a45a9ccf6400e60981b6 \
roles=GROUP_DATA_ACCESS_READ_ONLY
```
## Programmatic API key network access list
~> **Note:** MongoDB Atlas has deprecated whitelists, and the API will be disabled in June 2021. It is replaced by a
similar access list API which is live now. If you specify CIDR blocks or IP addresses to allow, you need to run **Vault
1.6.3 or greater** to avoid interruption. See [MongoDB Atlas documentation](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Project-IP-Access-List)
for further details.
Programmatic API Key access can and should be limited with an IP Network Access list. In the following example, both a CIDR
block and IP address are added to the IP Network Access list for Keys generated with this Vault role:
```shell-session
$ vault write mongodbatlas/roles/test \
project_id=5cf5a45a9ccf6400e60981b6 \
roles=GROUP_CLUSTER_MANAGER \
cidr_blocks=192.168.1.3/32 \
ip_addresses=192.168.1.3
```
Verify the created Programmatic API Key Vault role has the added CIDR block and IP address by running:
```shell-session
$ vault read mongodbatlas/roles/test
Key                Value
---                -----
cidr_blocks        [192.168.1.3/32]
ip_addresses       [192.168.1.3]
max_ttl            1h
organization_id    n/a
project_id         5cf5a45a9ccf6400e60981b6
roles              [GROUP_CLUSTER_MANAGER]
ttl                30m
```
## TTL and max TTL
Programmatic API Keys generated by Vault have a time-to-live (TTL) and maximum time-to-live (max TTL).
When a credential expires, it's automatically revoked. You can set the TTL and max TTL for each role,
or globally by tuning the secrets engine's configuration.
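
For example, a sketch of tuning the mount's default and maximum lease TTLs
directly (the values are illustrative):

```shell-session
$ vault secrets tune -default-lease-ttl=30m -max-lease-ttl=1h mongodbatlas/
```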
The following creates a Vault role "test" for a Project level Programmatic API key with a 2 hour time-to-live and a
max time-to-live of 5 hours.
```shell-session
$ vault write mongodbatlas/roles/test \
project_id=5cf5a45a9ccf6400e60981b6 \
roles=GROUP_DATA_ACCESS_READ_ONLY \
ttl=2h \
max_ttl=5h
```
You can verify the role that you have created with:
```shell-session
$ vault read mongodbatlas/roles/test
Key                Value
---                -----
max_ttl            5h0m0s
organization_id    n/a
project_id         5cf5a45a9ccf6400e60981b6
roles              [GROUP_DATA_ACCESS_READ_ONLY]
ttl                2h0m0s
```
## Generating credentials
After a user has authenticated to Vault and has sufficient permissions, a read request to the
`creds` endpoint for the role will generate and return new Programmatic API Keys:
```shell-session
$ vault read mongodbatlas/creds/test
Key Value
--- -----
lease_id mongodbatlas/creds/test/0fLBv1c2YDzPlJB1PwsRRKHR
lease_duration 2h
lease_renewable true
description vault-test-1563980947-1318
private_key 905ae89e-6ee8-40rd-ab12-613t8e3fe836
public_key klpruxce
```
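
Because the returned keys are tied to a lease, the standard lease commands
apply. For example, using the lease ID from the output above:

```shell-session
$ vault lease renew mongodbatlas/creds/test/0fLBv1c2YDzPlJB1PwsRRKHR

$ vault lease revoke mongodbatlas/creds/test/0fLBv1c2YDzPlJB1PwsRRKHR
```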
## API
The MongoDB Atlas secrets engine has a full HTTP API. Please see the
[MongoDB Atlas secrets engine API docs](/vault/api-docs/secret/mongodbatlas) for more details.
---
layout: docs
page_title: Google Cloud KMS - Secrets Engines
description: |-
The Google Cloud KMS secrets engine for Vault interfaces with Google Cloud
KMS for encryption/decryption of data and KMS key management through Vault.
---
# Google Cloud KMS secrets engine
The Google Cloud KMS Vault secrets engine provides encryption and key management
via [Google Cloud KMS][kms]. It supports management of keys, including creation,
rotation, and revocation, as well as encrypting and decrypting data with managed
keys. This enables management of KMS keys through Vault's policies and IAM
system.
## Setup
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. Enable the Google Cloud KMS secrets engine:
```text
$ vault secrets enable gcpkms
Success! Enabled the gcpkms secrets engine at: gcpkms/
```
By default, the secrets engine will mount at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure the secrets engine with account credentials and/or scopes:
```text
$ vault write gcpkms/config \
[email protected]
Success! Data written to: gcpkms/config
```
If you are running Vault from inside [Google Compute Engine][gce] or [Google
Kubernetes Engine][gke], the instance or pod service account can be used in
place of specifying the credentials JSON file. For more information on
authentication, see the [authentication section](#authentication) below.
1. Create a Google Cloud KMS key:
```text
$ vault write gcpkms/keys/my-key \
key_ring=projects/my-project/locations/my-location/keyRings/my-keyring \
rotation_period=72h
```
The `key_ring` parameter is specified in the following format:
```text
projects/<project>/locations/<location>/keyRings/<keyring>
```
where:
- `<project>` - the name of the GCP project (e.g. "my-project")
- `<location>` - the location of the KMS key ring (e.g. "us-east1", "global")
- `<keyring>` - the name of the KMS key ring (e.g. "my-keyring")
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can be used to encrypt, decrypt, and manage keys. The
following sections describe the different ways in which keys can be managed.
### Symmetric encryption/decryption
This section describes using a Cloud KMS key for symmetric
encryption/decryption. This is one of the most common types of encryption.
Google Cloud manages the key ring which is used to encrypt and decrypt data.
<table>
<thead>
<tr>
<th>Purpose</th>
<th>Supported Algorithms</th>
</tr>
<tr>
<td valign="top">
<code>encrypt_decrypt</code>
</td>
<td valign="top">
<code>symmetric_encryption</code>
</td>
</tr>
</thead>
</table>
1. Create or use an existing key eligible for symmetric encryption/decryption:
```text
$ vault write gcpkms/keys/my-key \
key_ring=projects/.../my-keyring \
purpose=encrypt_decrypt \
algorithm=symmetric_encryption
```
1. Encrypt plaintext data using the `/encrypt` endpoint with a named key:
```text
$ vault write gcpkms/encrypt/my-key plaintext="hello world"
Key Value
--- -----
ciphertext CiQAuMv0lTiKjrF43Lgr4...
key_version 1
```
Unlike Vault's transit backend, plaintext data does not need to be base64
encoded. The endpoint will automatically convert data.
Note that Vault is not _storing_ this data. The caller is responsible for
storing the resulting ciphertext.
1. Decrypt ciphertext using the `/decrypt` endpoint with a named key:
```text
$ vault write gcpkms/decrypt/my-key ciphertext=CiQAuMv0lTiKjrF43Lgr4...
Key Value
--- -----
plaintext hello world
```
For easier scripting, it is also possible to extract the plaintext directly:
```text
$ vault write -field=plaintext gcpkms/decrypt/my-key ciphertext=CiQAuMv0lTiKjrF43Lgr4...
hello world
```
1. Rotate the underlying encryption key. This will generate a new crypto key
version on Google Cloud KMS and set that version as the active key.
```text
$ vault write -f gcpkms/keys/rotate/my-key
WARNING! The following warnings were returned from Vault:
* The crypto key version was rotated successfully, but it can take up to 2
hours for the new crypto key version to become the primary. In practice, it
is usually much shorter. Be sure to issue a read operation and verify the
key version if you require new data to be encrypted with this key.
Key Value
--- -----
key_version 2
```
As the message says, rotation is not immediate. Depending on a number of
factors, the propagation of the new key can take quite some time. If you
have a need to immediately encrypt data with this new key, query the API to
wait for the key to become the primary. Alternatively, you can specify the
`key_version` parameter to lock to the exact key for use with encryption.
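
   For example, a sketch of pinning encryption to the newly created version
   rather than the current primary, using the `key_version` from the rotation
   output above:

   ```text
   $ vault write gcpkms/encrypt/my-key key_version=2 plaintext="hello world"
   ```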
1. Re-encrypt already-encrypted ciphertext to be encrypted with a new version of
the crypto key. Vault will decrypt the value using the appropriate key in the
keyring and then encrypt the resulting plaintext with the newest key in the
keyring.
```text
$ vault write gcpkms/reencrypt/my-key ciphertext=CiQAuMv0lTiKjrF43Lgr4...
Key Value
--- -----
ciphertext CiQAuMv0lZTTozQA/ElqM...
key_version 2
```
This process **does not** reveal the plaintext data. As such, a Vault policy
could grant an untrusted process the ability to re-encrypt ciphertext data,
since the process would not be able to get access to the plaintext data.
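
   As a sketch of such a policy, the untrusted process could be given a token
   whose policy grants only the re-encrypt path (the `reencrypt-only` policy
   name is illustrative):

   ```text
   $ vault policy write reencrypt-only - <<EOF
   path "gcpkms/reencrypt/my-key" {
     capabilities = ["update"]
   }
   EOF
   ```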
1. Trim old key versions by deleting Cloud KMS crypto key versions that are
older than the `min_version` allowed on the key.
```text
$ vault write gcpkms/keys/config/my-key min_version=10
```
Then delete all keys older than version 10. This will make it impossible to
encrypt, decrypt, or sign values with the older key by conventional means.
```text
$ vault write -f gcpkms/keys/trim/my-key
```
1. Delete the key to delete all key versions and Vault's record of the key.
```text
$ vault delete gcpkms/keys/my-key
```
This will make it impossible to encrypt, decrypt, or sign values by
conventional means.
### Asymmetric decryption
This section describes using a Cloud KMS key for asymmetric decryption. In this
model Google Cloud manages the key ring and exposes the public key via an API
endpoint. The public key is used to encrypt data offline and produce ciphertext.
When the plaintext is desired, the user submits the ciphertext to Cloud KMS
which decrypts the value using the corresponding private key.
<table>
<thead>
<tr>
<th>Purpose</th>
<th>Supported Algorithms</th>
</tr>
<tr>
<td valign="top">
<code>asymmetric_decrypt</code>
</td>
<td valign="top">
<code>rsa_decrypt_oaep_2048_sha256</code>
<br />
<code>rsa_decrypt_oaep_3072_sha256</code>
<br />
<code>rsa_decrypt_oaep_4096_sha256</code>
</td>
</tr>
</thead>
</table>
1. Create or use an existing key eligible for asymmetric decryption:
```text
$ vault write gcpkms/keys/my-key \
key_ring=projects/.../my-keyring \
purpose=asymmetric_decrypt \
algorithm=rsa_decrypt_oaep_4096_sha256
```
1. Retrieve the public key from Cloud KMS:
```text
$ gcloud kms keys versions get-public-key <crypto-key-version> \
--location <location> \
--keyring <key-ring> \
--key <key> \
--output-file ~/mykey.pub
```
1. Encrypt plaintext data with the public key. Note this varies widely between
programming languages. The following example uses OpenSSL, but you can use your
language's built-ins as well.
```text
$ openssl pkeyutl -in ~/my-secret-file \
-encrypt -pubin \
-inkey ~/mykey.pub \
-pkeyopt rsa_padding_mode:oaep \
-pkeyopt rsa_oaep_md:sha256 \
-pkeyopt rsa_mgf1_md:sha256
```
Note that this encryption happens offline (meaning outside of Vault), and
the encryption is happening with a _public_ key. Only Cloud KMS has the
corresponding _private_ key.
1. Decrypt ciphertext using the `/decrypt` endpoint with a named key:
```text
$ vault write gcpkms/decrypt/my-key key_version=1 ciphertext=CiQAuMv0lTiKjrF43Lgr4...
Key Value
--- -----
plaintext hello world
```
### Asymmetric signing
This section describes using a Cloud KMS key for asymmetric signing. In this
model Google Cloud manages the key ring and exposes the public key via an API
endpoint. A message or digest is signed with the corresponding private key, and
can be verified by anyone with the corresponding public key.
<table>
<thead>
<tr>
<th>Purpose</th>
<th>Supported Algorithms</th>
</tr>
<tr>
<td valign="top">
<code>asymmetric_sign</code>
</td>
<td valign="top">
<code>rsa_sign_pss_2048_sha256</code>
<br />
<code>rsa_sign_pss_3072_sha256</code>
<br />
<code>rsa_sign_pss_4096_sha256</code>
<br />
<code>rsa_sign_pkcs1_2048_sha256</code>
<br />
<code>rsa_sign_pkcs1_3072_sha256</code>
<br />
<code>rsa_sign_pkcs1_4096_sha256</code>
<br />
<code>ec_sign_p256_sha256</code>
<br />
<code>ec_sign_p384_sha384</code>
</td>
</tr>
</thead>
</table>
1. Create or use an existing key eligible for asymmetric signing:
```text
$ vault write gcpkms/keys/my-key \
key_ring=projects/.../my-keyring \
purpose=asymmetric_sign \
algorithm=ec_sign_p384_sha384
```
1. Calculate the base64-encoded binary digest. Use the hashing algorithm that
   corresponds to the key type:
```text
$ export DIGEST=$(openssl dgst -sha384 -binary /my/file | base64)
```
Ask Cloud KMS to sign the digest:
```text
$ vault write gcpkms/sign/my-key key_version=1 digest=$DIGEST
Key Value
--- -----
signature MGYCMQDbOS2462SKMsGdh2GQ...
```
1. Verify the signature of the digest:
```text
$ vault write gcpkms/verify/my-key key_version=1 digest=$DIGEST signature=$SIGNATURE
Key Value
--- -----
valid true
```
Note: it is also possible to verify this signature without Vault. Download
the public key from Cloud KMS, and use a tool like OpenSSL or your
programming language primitives to verify the signature.
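
   A minimal offline sketch with OpenSSL, assuming the signing key's public
   key has been downloaded (for example with `gcloud kms keys versions
   get-public-key`, as shown earlier) and that `$SIGNATURE` holds the
   base64-encoded signature returned above:

   ```text
   $ echo "$SIGNATURE" | base64 --decode > /my/file.sig
   $ openssl dgst -sha384 -verify ~/mykey.pub -signature /my/file.sig /my/file
   Verified OK
   ```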
## Authentication
The Google Cloud KMS Vault secrets backend uses the official Google Cloud Golang
SDK. This means it supports the common ways of [providing credentials to Google
Cloud][cloud-creds]. In addition to specifying `credentials` directly via Vault
configuration, you can also get configuration from the following values **on the
Vault server**:
1. The environment variable `GOOGLE_APPLICATION_CREDENTIALS`. This is specified
   as the **path** to a Google Cloud credentials file, typically for a service
   account. If this environment variable is present, the resulting credentials are
   used. If the credentials are invalid, an error is returned. (See the example
   following this list.)
1. Default instance credentials. When no environment variable is present, the
   default service account credentials are used. This is useful when running Vault
   on [Google Compute Engine][gce] or [Google Kubernetes Engine][gke].
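
For example, a brief sketch of the environment-variable approach (the paths
are illustrative):

```text
$ export GOOGLE_APPLICATION_CREDENTIALS=/path/to/my-credentials.json
$ vault server -config=/etc/vault/config.hcl
```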
For more information on service accounts, please see the [Google Cloud Service
Accounts documentation][service-accounts].
To use this secrets engine, the service account must have the following
minimum scope(s):
```text
https://www.googleapis.com/auth/kms
```
### Required permissions
The credentials given to Vault must have the following role:
```text
roles/cloudkms.admin
```
If Vault will not be creating keys, you can reduce the permissions. For example,
to create keys out of band and have Vault manage the encryption/decryption, you
only need the following permissions:
```text
roles/cloudkms.cryptoKeyEncrypterDecrypter
```
To sign and verify, you only need the following permissions:
```text
roles/cloudkms.signerVerifier
```
For more information, please see the [Google Cloud KMS IAM documentation][kms-iam].
## FAQ
**How is this different than Vault's transit secrets engine?**<br />
Vault's [transit][vault-transit] secrets engine uses in-memory keys to
encrypt/decrypt keys. In general it will be faster and more performant. However,
users who need physical, off-site, or out-of-band key management can use the
[Google Cloud KMS][kms] secrets engine to get those benefits while leveraging
Vault's policy and identity system.
**Can Vault use an existing KMS key?**<br />
You can use the `/register` endpoint to configure Vault to talk to an existing
Google Cloud KMS key. As long as the IAM permissions are correct, Vault will be
able to encrypt/decrypt data and rotate the key. See the [api][api] for more
information.
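As a sketch only (see the [api][api] for the exact path and parameters),
registering an existing crypto key might look like:

```text
$ vault write gcpkms/keys/register/my-key \
    crypto_key=projects/my-project/locations/my-location/keyRings/my-keyring/cryptoKeys/my-key \
    verify=true
```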
**Can this be used with a hardware key like an HSM?**<br />
Yes! You can set `protection_level` to "hsm" when creating a key, or use an
existing Cloud KMS key that is backed by an HSM.
**How much does this cost?**<br />
The plugin is free and open source. KMS costs vary by key type and the number of
operations. Please see the [Cloud KMS pricing page][kms-pricing] for more
details.
## Help & support
The Google Cloud KMS Vault secrets engine is written as an external Vault
plugin. The code lives outside the main Vault repository. It is automatically
bundled with Vault releases, but the code is managed separately.
Please report issues, add feature requests, and submit contributions to the
[vault-plugin-secrets-gcpkms repo on GitHub][repo].
## API
The Google Cloud KMS secrets engine has a full HTTP API. Please see the
[Google Cloud KMS secrets engine API docs][api] for more details.
[api]: /vault/api-docs/secret/gcpkms
[cloud-creds]: https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application
[gce]: https://cloud.google.com/compute/
[gke]: https://cloud.google.com/kubernetes-engine/
[kms]: https://cloud.google.com/kms
[kms-iam]: https://cloud.google.com/kms/docs/reference/permissions-and-roles
[kms-pricing]: https://cloud.google.com/kms/pricing
[repo]: https://github.com/hashicorp/vault-plugin-secrets-gcpkms
[service-accounts]: https://cloud.google.com/compute/docs/access/service-accounts
[vault-transit]: /vault/docs/secrets/transit
---
layout: docs
page_title: KMIP - Profiles Support
description: |-
The KMIP profiles define the use of KMIP objects, attributes, operations, message elements
and authentication methods within specific contexts of KMIP server and client interaction.
These profiles define a set of normative constraints for employing KMIP within a particular
environment or context of use.
---
# KMIP profiles version 1.4
This document specifies conformance clauses in accordance with the OASIS TC Process ([TC-PROC section 2.18 paragraph 8a][tc-proc-2.18])
for the KMIP Specification ([KMIP-SPEC 12.1 and 12.2][kmip-spec]) for a KMIP server or KMIP client through profiles that define the
use of KMIP objects, attributes, operations, message elements and authentication methods within specific contexts of
KMIP server and client interaction.
Vault implements version 1.4 of the following Key Management Interoperability Protocol Profiles:
## [Baseline server][baseline-server]
1. Supports the following objects:
| Object | Supported |
| ----------------------------------------------------------------------- | :-------: |
| Attribute [KMIP-SPEC 2.1.1][kmip-spec-2.1.1] | ✅ |
| Credential [KMIP-SPEC 2.1.2][kmip-spec-2.1.2] | ✅ |
| Key Block [KMIP-SPEC 2.1.3][kmip-spec-2.1.3] | ✅ |
| Key Value [KMIP-SPEC 2.1.4][kmip-spec-2.1.4] | ✅ |
| Template-Attribute Structure [KMIP-SPEC 2.1.8][kmip-spec-2.1.8] | ✅ |
| Extension Information [KMIP-SPEC 2.1.9][kmip-spec-2.1.9] | ✅ |
| Profile Information [KMIP-SPEC 2.1.19][kmip-spec-2.1.19] | ✅ |
| Validation Information [KMIP-SPEC 2.1.20][kmip-spec-2.1.20] | ✅ |
| Capability Information [KMIP-SPEC 2.1.21][kmip-spec-2.1.21] | ✅ |
2. Supports the following subsets of attributes:
| Attribute | Supported | Notes |
| -----------------------------------------------------------------------| :-------: | :----: |
| Unique Identifier [KMIP-SPEC 3.1][kmip-spec-3.1] | ✅ | |
| Name [KMIP-SPEC 3.2][kmip-spec-3.2] | ✅ | |
| Object Type [KMIP-SPEC 3.3][kmip-spec-3.3] | ✅ | |
| Cryptographic Algorithm [KMIP-SPEC 3.4][kmip-spec-3.4] | ✅ | |
| Cryptographic Length [KMIP-SPEC 3.5][kmip-spec-3.5] | ✅ | |
| Cryptographic Parameters [KMIP-SPEC 3.6][kmip-spec-3.6] | ✅ | |
| Digest [KMIP-SPEC 3.17][kmip-spec-3.17] | ✅ | |
| Cryptographic Usage Mask [KMIP-SPEC 3.19][kmip-spec-3.19] | ✅ | |
| State [KMIP-SPEC 3.22][kmip-spec-3.22] | ✅ | |
| Initial Date [KMIP-SPEC 3.23][kmip-spec-3.23] | ✅ | |
| Process Start Date [KMIP-SPEC 3.25][kmip-spec-3.25] | ✅ | Vault 1.11 |
| Protect Stop Date [KMIP-SPEC 3.26][kmip-spec-3.26] | ✅ | Vault 1.11 |
| Activation Date [KMIP-SPEC 3.24][kmip-spec-3.24] | ✅ | |
| Deactivation Date [KMIP-SPEC 3.27][kmip-spec-3.27] | ✅ | |
| Compromise Occurrence Date [KMIP-SPEC 3.29][kmip-spec-3.29] | ✅ | |
| Compromise Date [KMIP-SPEC 3.30][kmip-spec-3.30] | ✅ | |
| Revocation Reason [KMIP-SPEC 3.31][kmip-spec-3.31] | ✅ | |
| Object Group [KMIP-SPEC 3.33][kmip-spec-3.33] | ✅ | |
| Fresh [KMIP-SPEC 3.34][kmip-spec-3.34] | ✅ | |
| Link [KMIP-SPEC 3.35][kmip-spec-3.35] | ✅ | |
| Last Change Date [KMIP-SPEC 3.38][kmip-spec-3.38] | ✅ | |
| Alternative Name [KMIP-SPEC 3.40][kmip-spec-3.40] | ✅ | Vault 1.12 |
| Key Value Present [KMIP-SPEC 3.41][kmip-spec-3.41] | ✅ | Vault 1.12 |
| Key Value Location [KMIP-SPEC 3.42][kmip-spec-3.42] | 🔴 | |
| Original Creation Date [KMIP-SPEC 3.43][kmip-spec-3.43] | ✅ | |
| Random Number Generator [KMIP-SPEC 3.44][kmip-spec-3.44] | ✅ | |
| Description [KMIP-SPEC 3.46][kmip-spec-3.46] | ✅ | |
| Comment [KMIP-SPEC 3.47][kmip-spec-3.47] | ✅ | |
| Sensitive [KMIP-SPEC 3.48][kmip-spec-3.48] | ✅ | |
| Always Sensitive [KMIP-SPEC 3.49][kmip-spec-3.49] | ✅ | |
| Extractable [KMIP-SPEC 3.50][kmip-spec-3.50] | ✅ | |
| Never Extractable [KMIP-SPEC 3.51][kmip-spec-3.51] | ✅ | |
3. Supports the following client-to-server operations:
| Operation | Supported | Notes |
| ------------------------------------------------------| :--------:|:-----:|
| Locate [KMIP-SPEC 4.9][kmip-spec-4.9] | ✅ | Vault version 1.11 supports attributes Activation Date, Application Specific Information, Cryptographic Algorithm, Cryptographic Length, Name, Object Type, Original Creation Date, and State. <br/> Vault version 1.12 supports all profile attributes except for Key Value Location. |
| Check [KMIP-SPEC 4.10][kmip-spec-4.10] | 🔴 | |
| Get [KMIP-SPEC 4.11][kmip-spec-4.11] | ✅ | |
| Get Attributes [KMIP-SPEC 4.12][kmip-spec-4.12] | ✅ | |
| Get Attribute List [KMIP-SPEC 4.13][kmip-spec-4.13] | ✅ | |
| Add Attribute [KMIP-SPEC 4.14][kmip-spec-4.14] | ✅ | |
| Modify Attribute [KMIP-SPEC 4.15][kmip-spec-4.15] | ✅ | Vault 1.12 |
| Delete Attribute [KMIP-SPEC 4.16][kmip-spec-4.16] | ✅ | Vault 1.12 |
| Activate [KMIP-SPEC 4.19][kmip-spec-4.19] | ✅ | |
| Revoke [KMIP-SPEC 4.20][kmip-spec-4.20] | ✅ | |
| Destroy [KMIP-SPEC 4.21][kmip-spec-4.21] | ✅ | |
| Query [KMIP-SPEC 4.25][kmip-spec-4.25] | ✅ | Vault 1.11 |
| Discover Versions [KMIP-SPEC 4.26][kmip-spec-4.26] | ✅ | |
4. Supports the following message contents:
| Message Content | Supported |
| -----------------------------------------------------------------| :--------:|
| Protocol Version [KMIP-SPEC 6.1][kmip-spec-6.1] | ✅ |
| Operation [KMIP-SPEC 6.2][kmip-spec-6.2] | ✅ |
| Maximum Response Size [KMIP-SPEC 6.3][kmip-spec-6.3] | ✅ |
| Unique Batch Item ID [KMIP-SPEC 6.4][kmip-spec-6.4] | ✅ |
| Time Stamp [KMIP-SPEC 6.5][kmip-spec-6.5] | ✅ |
| Asynchronous Indicator [KMIP-SPEC 6.7][kmip-spec-6.7] | ✅ |
| Result Status [KMIP-SPEC 6.9][kmip-spec-6.9] | ✅ |
| Result Reason [KMIP-SPEC 6.10][kmip-spec-6.10] | ✅ |
| Batch Order Option [KMIP-SPEC 6.12][kmip-spec-6.12] | ✅ |
| Batch Error Continuation Option [KMIP-SPEC 6.13][kmip-spec-6.13] | ✅ |
| Batch Count [KMIP-SPEC 6.14][kmip-spec-6.14] | ✅ |
| Batch Item [KMIP-SPEC 6.15][kmip-spec-6.15] | ✅ |
| Attestation Capable Indicator [KMIP-SPEC 6.17][kmip-spec-6.17] | ✅ |
| Client Correlation Value [KMIP-SPEC 6.18][kmip-spec-6.18] | ✅ |
| Server Correlation Value [KMIP-SPEC 6.19][kmip-spec-6.19] | ✅ |
| Message Extension [KMIP-SPEC 6.16][kmip-spec-6.16] | ✅ |
5. Supports the ID Placeholder [KMIP-SPEC 4][kmip-spec-4]
6. Supports Message Format [KMIP-SPEC 7][kmip-spec-7]
7. Supports Authentication [KMIP-SPEC 8][kmip-spec-8]
8. Supports the TTLV encoding [KMIP-SPEC 9.1][kmip-spec-9.1]
9. Supports the transport requirements [KMIP-SPEC 10][kmip-spec-10]
10. Supports Error Handling [KMIP-SPEC 11][kmip-spec-11] for any supported object, attribute, or operation
11. Optionally supports any clause within [KMIP-SPEC][kmip-spec] that is not listed above
12. Optionally supports extensions outside the scope of this standard (e.g., vendor extensions, conformance clauses) that do not contradict any KMIP requirements. Vault does not implement any such extensions.
## [Symmetric key lifecycle server][lifecycle-server]
1. SHALL conform to the [Baseline Server][baseline-server]
2. Supports the following objects:
| Object | Supported |
| ----------------------------------------------------------------------- | :-------: |
| Symmetric Key [KMIP-SPEC 2.2.2][kmip-spec-2.2.2] | ✅ |
| Key Format Type [KMIP-SPEC 9.1.3.2.3][kmip-spec-9.1.3.2.3] | ✅ |
3. Supports the following subsets of attributes:
| Attribute | Supported | Notes |
| -----------------------------------------------------------------------| :-------: | :---: |
| Cryptographic Algorithm [KMIP-SPEC 3.4][kmip-spec-3.4] | ✅ | |
| Object Type [KMIP-SPEC 3.3][kmip-spec-3.3] | ✅ | |
| Process Start Date [KMIP-SPEC 3.25][kmip-spec-3.25] | ✅ | Vault 1.11 |
| Protect Stop Date [KMIP-SPEC 3.26][kmip-spec-3.26] | ✅ | Vault 1.11 |
4. Supports the following client-to-server operations:
| Operation | Supported |
| ------------------------------------------------------| :--------:|
| Create [KMIP-SPEC 4.1][kmip-spec-4.1] | ✅ |
5. Supports the following message encoding:
| Message Encoding | Supported | Notes |
| -------------------------------------------------------------------------------------| :--------:|:-----:|
| Cryptographic Algorithm [KMIP-SPEC 9.1.3.2.13][kmip-spec-9.1.3.2.13] with values: | | |
| i. 3DES | ✅ | Vault 1.12 |
| ii. AES | ✅ | |
| Object Type [KMIP-SPEC 9.1.3.2.12][kmip-spec-9.1.3.2.12] with value: | | |
| i. Symmetric Key | ✅ | |
| Key Format Type [KMIP-SPEC 9.1.3.2.3][kmip-spec-9.1.3.2.3] with value: | | |
| i. Raw | ✅ | |
| ii. Transparent Symmetric Key | 🔴 | |
6. MAY support any clause within [KMIP-SPEC][kmip-spec] provided it does not conflict with any other clause within the section [Symmetric Key Lifecycle Server][lifecycle-server]
7. MAY support extensions outside the scope of this standard (e.g., vendor extensions, conformance clauses) that do not contradict any KMIP requirements.
## [Basic cryptographic server][basic-cryptographic-server]
1. SHALL conform to the [Baseline Server][baseline-server]
2. Supports the following client-to-server operations:
| Operation | Supported | Notes |
| ------------------------------------------------------| :--------:| --------|
| Encrypt [KMIP-SPEC 4.29][kmip-spec-4.29] | ✅ | Vault 1.11 <br/> Supported for AES, unsupported for 3DES: <br/><br/> Supported Block Cipher Modes: <br/> <ol> <li> GCM </li> <li> CBC </li> <li> CFB </li> <li> CTR </li> <li> ECB </li> <li> OFB </li> </ol> <br/> Stream operations are supported except for GCM block cipher mode. <br/><br/> Supported padding methods: <br/> <ol> <li> None </li> <li> PKCS5 </li> </ol> |
| Decrypt [KMIP-SPEC 4.30][kmip-spec-4.30] | ✅ | Vault 1.11 <br/> Supported for AES, unsupported for 3DES: <br/><br/> Supported Block Cipher Modes: <br/> <ol> <li> GCM </li> <li> CBC </li> <li> CFB </li> <li> CTR </li> <li> ECB </li> <li> OFB </li> </ol> <br/> Stream operations are supported except for GCM block cipher mode. <br/><br/> Supported padding methods: <br/> <ol> <li> None </li> <li> PKCS5 </li> </ol> |
3. MAY support any clause within [KMIP-SPEC][kmip-spec] provided it does not conflict with any other clause within the section [Basic Cryptographic Server][basic-cryptographic-server]
4. MAY support extensions outside the scope of this standard (e.g., vendor extensions, conformance clauses) that do not contradict any KMIP requirements.
## [Asymmetric key lifecycle server][asymmetric-key-lifecycle-server]
1. SHALL conform to the [Baseline Server][baseline-server]
2. Supports the following objects:
| Object | Supported |
| ----------------------------------------------------------------------- | :-------: |
| Symmetric Key [KMIP-SPEC 2.2.2][kmip-spec-2.2.2] | ✅ |
| Key Format Type [KMIP-SPEC 9.1.3.2.3][kmip-spec-9.1.3.2.3] | ✅ |
3. Supports the following objects:
| Object | Supported | Notes |
| --------------------------------------------------------------------| :-------: | :---: |
| Public Key [KMIP-SPEC 2.2.3][kmip-spec-2.2.3] | ✅ | Vault 1.13 |
| Private Key [KMIP-SPEC 2.2.4][kmip-spec-2.2.4] | ✅ | Vault 1.13 |
| Process Start Date [KMIP-SPEC 3.25][kmip-spec-3.25] | ✅ | Vault 1.11 |
| Key Format Type [KMIP-SPEC 9.1.3.2.3][kmip-spec-9.1.3.2.3] | ✅ | |
4. Supports the following attributes:
| Attribute | Supported | Notes |
| -----------------------------------------------------------------------| :-------: | :---: |
| Cryptographic Algorithm [KMIP-SPEC 3.4][kmip-spec-3.4] | ✅ | |
| Object Type [KMIP-SPEC 3.3][kmip-spec-3.3] | ✅ | |
| Process Start Date [KMIP-SPEC 3.25][kmip-spec-3.25] | ✅ | Vault 1.11 |
| Protect Stop Date [KMIP-SPEC 3.26][kmip-spec-3.26] | ✅ | Vault 1.11 |
5. Supports the following message encoding:
| Message Encoding | Supported | Notes |
| -------------------------------------------------------------------------------------| :--------:|:-----:|
| Cryptographic Algorithm [KMIP-SPEC 9.1.3.2.13][kmip-spec-9.1.3.2.13] with values: | | |
| i. RSA | ✅ | Vault 1.13 |
| Object Type [KMIP-SPEC 9.1.3.2.12][kmip-spec-9.1.3.2.12] with value: | | |
| i. Public Key | ✅ | Vault 1.13 |
| ii. Private Key | ✅ | Vault 1.13 |
| Key Format Type [KMIP-SPEC 9.1.3.2.3][kmip-spec-9.1.3.2.3] with value: | | |
| i. PKCS#1 | ✅ | Vault 1.13 <br/> Supported for Private and Public Keys|
| ii. PKCS#8 | ✅ | Vault 1.13 <br/> Supported for Private Key|
| iii. Transparent RSA Public Key | ✅ | Vault 1.13 |
| iv. Transparent RSA Private Key | ✅ | Vault 1.13 |
| v. X.509 | ✅ | Vault 1.13 <br/> Supported for Public Key|
6. MAY support any clause within [KMIP-SPEC][kmip-spec] provided it does not conflict with any other clause within the section [Asymmetric Key Lifecycle Server][asymmetric-key-lifecycle-server]
7. MAY support extensions outside the scope of this standard (e.g., vendor extensions, conformance clauses) that do not contradict any KMIP requirements.
## [Advanced cryptographic server][advanced-cryptographic-server]
1. SHALL conform to the [Baseline Server][baseline-server]
2. Supports the following client-to-server operations:
| Operation | Supported | Notes |
| ------------------------------------------------------| :--------:| --------|
| Encrypt [KMIP-SPEC 4.29][kmip-spec-4.29] | ✅ | Vault 1.11 <br/> [See Basic Cryptographic Server](#basic-cryptographic-server) <br/><br/> Vault 1.13 <br/> Supported for RSA Asymmetric Keys: <br/><br/> Supported padding methods: <br/> <ol> <li> OAEP </li> <li> PKCS1v15 </li> </ol> <br/> Streaming operations are not supported. |
| Decrypt [KMIP-SPEC 4.30][kmip-spec-4.30] | ✅ | Vault 1.11 <br/> [See Basic Cryptographic Server](#basic-cryptographic-server) <br/><br/> Vault 1.13 <br/> Supported for RSA Asymmetric Keys: <br/><br/> Supported padding methods: <br/> <ol> <li> OAEP </li> <li> PKCS1v15 </li> </ol> <br/> Streaming operations are not supported. |
| Sign [KMIP-SPEC 4.31][kmip-spec-4.31] | ✅ | Vault 1.13 <br/> Supported for RSA Asymmetric Keys: <br/><br/> Supported padding methods: <br/> <ol> <li> PSS </li> <li> PKCS1v15 </li> </ol> <br/><br/> The supported hashing algorithms with PSS are: <br/> <ol> <li> SHA224 </li> <li> SHA256 </li> <li> SHA384 </li> <li> SHA512 </li> <li> RIPEMD160 </li> <li> SHA512_224 </li> <li> SHA512_256 </li> <li> SHA3_224 </li> <li> SHA3_256 </li> <li> SHA3_384 </li> <li> SHA3_512 </li> </ol> <br/> The supported hashing algorithms with PKCS1v15 are: <br/> <ol> <li> SHA224 </li> <li> SHA256 </li> <li> SHA384 </li> <li> SHA512 </li> <li> RIPEMD160 </li> </ol> <br/> Streaming operations are supported.|
| Signature Verify [KMIP-SPEC 4.32][kmip-spec-4.32] | ✅ | Vault 1.13 <br/> Supported for RSA Asymmetric Keys: <br/><br/> Supported padding methods: <br/> <ol> <li> PSS </li> <li> PKCS1v15 </li> </ol> <br/><br/> The supported hashing algorithms with PSS are: <br/> <ol> <li> SHA224 </li> <li> SHA256 </li> <li> SHA384 </li> <li> SHA512 </li> <li> RIPEMD160 </li> <li> SHA512_224 </li> <li> SHA512_256 </li> <li> SHA3_224 </li> <li> SHA3_256 </li> <li> SHA3_384 </li> <li> SHA3_512 </li> </ol> <br/> The supported hashing algorithms with PKCS1v15 are: <br/> <ol> <li> SHA224 </li> <li> SHA256 </li> <li> SHA384 </li> <li> SHA512 </li> <li> RIPEMD160 </li> </ol> <br/> Streaming operations are supported.|
| MAC [KMIP-SPEC 4.33][kmip-spec-4.33] | ✅ | Vault 1.13 <br/> Supported for RSA Asymmetric Keys: <br/><br/> The supported hashing algorithms are: <br/> <ol> <li> SHA224 </li> <li> SHA256 </li> <li> SHA384 </li> <li> SHA512 </li> <li> RIPEMD160 </li> <li> SHA512_224 </li> <li> SHA512_256 </li> <li> SHA3_256 </li> <li> SHA3_384 </li> <li> SHA3_512 </li> </ol> <br/> The following hashing algorithms are not supported: <br/> <ol> <li> MD4 </li> <li> MD5 </li> <li> SHA1 </li> </ol> <br/> Streaming operations are supported. |
| MAC Verify [KMIP-SPEC 4.34][kmip-spec-4.34] | ✅ | Vault 1.13 <br/> Supported for RSA Asymmetric Keys: <br/><br/> The supported hashing algorithms are: <br/> <ol> <li> SHA224 </li> <li> SHA256 </li> <li> SHA384 </li> <li> SHA512 </li> <li> RIPEMD160 </li> <li> SHA512_224 </li> <li> SHA512_256 </li> <li> SHA3_256 </li> <li> SHA3_384 </li> <li> SHA3_512 </li> </ol> <br/> The following hashing algorithms are not supported: <br/> <ol> <li> MD4 </li> <li> MD5 </li> <li> SHA1 </li> </ol> <br/> Streaming operations are supported. |
| RNG Retrieve [KMIP-SPEC 4.35][kmip-spec-4.35] | ✅ | Vault 1.13 |
| RNG Seed [KMIP-SPEC 4.36][kmip-spec-4.36] | ✅ | Vault 1.13 |
3. MAY support any clause within [KMIP-SPEC][kmip-spec] provided it does not conflict with any other clause within the section [Advanced Cryptographic Server][advanced-cryptographic-server]
4. MAY support extensions outside the scope of this standard (e.g., vendor extensions, conformance clauses) that do not contradict any KMIP requirements.
[kmip-spec-2.1.1]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660735
[kmip-spec-2.1.2]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660736
[kmip-spec-2.1.3]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660737
[kmip-spec-2.1.4]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660738
[kmip-spec-2.1.8]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660757
[kmip-spec-2.1.9]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660758
[kmip-spec-2.1.19]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660768
[kmip-spec-2.1.20]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660769
[kmip-spec-2.1.21]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660770
[kmip-spec-2.2.3]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660776
[kmip-spec-2.2.4]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660777
[kmip-spec-3.1]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660784
[kmip-spec-3.2]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660785
[kmip-spec-3.3]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660786
[kmip-spec-3.4]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660787
[kmip-spec-3.5]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660788
[kmip-spec-3.6]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660789
[kmip-spec-3.17]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660800
[kmip-spec-3.19]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660807
[kmip-spec-3.22]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660810
[kmip-spec-3.23]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660811
[kmip-spec-3.25]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660813
[kmip-spec-3.26]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660814
[kmip-spec-3.24]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660812
[kmip-spec-3.27]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660815
[kmip-spec-3.29]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660817
[kmip-spec-3.30]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660818
[kmip-spec-3.31]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660819
[kmip-spec-3.33]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660821
[kmip-spec-3.34]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660822
[kmip-spec-3.35]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660823
[kmip-spec-3.38]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660826
[kmip-spec-3.40]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660828
[kmip-spec-3.41]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660829
[kmip-spec-3.42]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660830
[kmip-spec-3.43]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660831
[kmip-spec-3.44]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660832
[kmip-spec-3.46]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660834
[kmip-spec-3.47]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660835
[kmip-spec-3.48]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660836
[kmip-spec-3.49]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660837
[kmip-spec-3.50]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660838
[kmip-spec-3.51]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660839
[kmip-spec-4.9]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660849
[kmip-spec-4.10]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660850
[kmip-spec-4.11]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660851
[kmip-spec-4.12]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660852
[kmip-spec-4.13]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660853
[kmip-spec-4.14]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660854
[kmip-spec-4.15]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660855
[kmip-spec-4.16]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660856
[kmip-spec-4.19]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660859
[kmip-spec-4.20]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660860
[kmip-spec-4.21]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660861
[kmip-spec-4.25]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660865
[kmip-spec-4.26]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660866
[kmip-spec-4.29]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660869
[kmip-spec-4.30]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660870
[kmip-spec-4.31]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660871
[kmip-spec-4.32]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660872
[kmip-spec-4.33]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660873
[kmip-spec-4.34]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660874
[kmip-spec-4.35]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660875
[kmip-spec-4.36]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660876
[kmip-spec-6.1]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660887
[kmip-spec-6.2]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660888
[kmip-spec-6.3]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660889
[kmip-spec-6.4]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660890
[kmip-spec-6.5]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660891
[kmip-spec-6.7]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660893
[kmip-spec-6.9]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660895
[kmip-spec-6.10]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660896
[kmip-spec-6.12]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660898
[kmip-spec-6.13]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660899
[kmip-spec-6.14]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660900
[kmip-spec-6.15]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660901
[kmip-spec-6.17]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660903
[kmip-spec-6.18]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660904
[kmip-spec-6.19]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660905
[kmip-spec-6.16]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660902
[kmip-spec-4]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660840
[kmip-spec-7]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660906
[kmip-spec-8]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660909
[kmip-spec-9.1]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660911
[kmip-spec-10]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660973
[kmip-spec-11]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660974
[kmip-spec]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html
[kmip-spec-2.2.2]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660775
[kmip-spec-9.1.3.2.3]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660923
[kmip-spec-4.1]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660841
[kmip-spec-9.1.3.2.13]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660933
[kmip-spec-9.1.3.2.12]: https://docs.oasis-open.org/kmip/spec/v1.4/errata01/os/kmip-spec-v1.4-errata01-os-redlined.html#_Toc490660932
[baseline-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431430
[lifecycle-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431487
[basic-cryptographic-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431527
[asymmetric-key-lifecycle-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431516
[advanced-cryptographic-server]: http://docs.oasis-open.org/kmip/profiles/v1.4/os/kmip-profiles-v1.4-os.html#_Toc491431528
[tc-proc-2.18]: https://www.oasis-open.org/policies-guidelines/tc-process-2017-05-26/technical-committee-tc-process-27-july-2011/#specQuality
errata01 os kmip spec v1 4 errata01 os redlined html Toc490660895 kmip spec 6 10 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660896 kmip spec 6 12 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660898 kmip spec 6 13 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660899 kmip spec 6 14 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660900 kmip spec 6 15 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660901 kmip spec 6 17 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660903 kmip spec 6 18 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660904 kmip spec 6 19 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660905 kmip spec 6 16 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660902 kmip spec 4 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660840 kmip spec 7 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660906 kmip spec 8 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660909 kmip spec 9 1 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660911 kmip spec 10 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660973 kmip spec 11 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660974 kmip spec https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html kmip spec 2 2 2 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660775 kmip spec 9 1 3 2 3 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660923 kmip spec 4 1 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660841 kmip spec 9 1 3 2 13 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660933 kmip spec 9 1 3 2 12 https docs oasis open org kmip spec v1 4 errata01 os kmip spec v1 4 errata01 os redlined html Toc490660932 baseline server http docs oasis open org kmip profiles v1 4 os kmip profiles v1 4 os html Toc491431430 lifecycle server http docs oasis open org kmip profiles v1 4 os kmip profiles v1 4 os html Toc491431487 basic cryptographic server http docs oasis open org kmip profiles v1 4 os kmip profiles v1 4 os html Toc491431527 asymmetric key lifecycle server http docs oasis open org kmip profiles v1 4 os kmip profiles v1 4 os html Toc491431516 advanced cryptographic server http docs oasis open org kmip profiles v1 4 os kmip profiles v1 4 os html Toc491431528 tc proc 2 18 https www oasis open org policies guidelines tc process 2017 05 26 technical committee tc process 27 july 2011 specQuality |
---
layout: docs
page_title: Kubernetes - Secrets Engines
description: >-
The Kubernetes secrets engine for Vault generates Kubernetes service account
tokens, service accounts, role bindings, and roles dynamically.
---
# Kubernetes secrets engine
@include 'x509-sha1-deprecation.mdx'
The Kubernetes Secrets Engine for Vault generates Kubernetes service account tokens, and
optionally service accounts, role bindings, and roles. The created service account tokens have
a configurable TTL and any objects created are automatically deleted when the Vault lease expires.
For each lease, Vault will create a service account token attached to the
defined service account. The service account token is returned to the caller.
To learn more about service accounts in Kubernetes, visit the
[Kubernetes service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
and [Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
documentation.
~> **Note:** We do not recommend using tokens created by the Kubernetes Secrets Engine to
authenticate with the [Vault Kubernetes Auth Method](/vault/docs/auth/kubernetes). This will
generate many unique identities in Vault that will be hard to manage.
## Setup
The Kubernetes Secrets Engine must be configured in advance before it
can perform its functions. These steps are usually completed by an operator or configuration
management tool.
1. By default, Vault will connect to Kubernetes using its own service account.
If using the [standard Helm chart](https://github.com/hashicorp/vault-helm), this service account
is created automatically by default and named after the Helm release (often `vault`, but this can be
configured via the Helm value `server.serviceAccount.name`).
It's necessary to ensure that the service account Vault uses will have permissions to manage
service account tokens, and optionally manage service accounts, roles, and role bindings. These
permissions can be managed using a Kubernetes role or cluster role. The role is attached to the
Vault service account with a role binding or cluster role binding.
For example, a minimal cluster role to create service account tokens is:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: k8s-minimal-secrets-abilities
rules:
- apiGroups: [""]
resources: ["serviceaccounts/token"]
verbs: ["create"]
```
Similarly, you can create a more permissive cluster role with full permissions to manage tokens,
service accounts, bindings, and roles.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: k8s-full-secrets-abilities
rules:
- apiGroups: [""]
resources: ["serviceaccounts", "serviceaccounts/token"]
verbs: ["create", "update", "delete"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["rolebindings", "clusterrolebindings"]
verbs: ["create", "update", "delete"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["roles", "clusterroles"]
verbs: ["bind", "escalate", "create", "update", "delete"]
```
Create this role in Kubernetes (e.g., with `kubectl apply -f`).
Moreover, if you want to use label selection to configure the namespaces on which a role can act,
you will need to grant Vault permissions to read namespaces.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: k8s-full-secrets-abilities-with-labels
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get"]
- apiGroups: [""]
resources: ["serviceaccounts", "serviceaccounts/token"]
verbs: ["create", "update", "delete"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["rolebindings", "clusterrolebindings"]
verbs: ["create", "update", "delete"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["roles", "clusterroles"]
verbs: ["bind", "escalate", "create", "update", "delete"]
```
~> **Note:** Getting the right permissions for Vault will most likely require some trial and
error, since Kubernetes has strict protections against privilege escalation. You can read more
in the
[Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping).
~> **Note:** Protect the Vault service account, especially if you use broader permissions for it,
as it is essentially a cluster administrator account.
1. Create a role binding to bind the role to Vault's service account and grant Vault permission
to manage tokens.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: vault-token-creator-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: k8s-minimal-secrets-abilities
subjects:
- kind: ServiceAccount
name: vault
namespace: vault
```
For more information on Kubernetes roles, service accounts, bindings, and tokens, visit the
[Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/).
1. If Vault will not be automatically managing roles or service accounts (see
[Automatically Managing Roles and Service Accounts](#automatically-managing-roles-and-service-accounts)),
then you will need to set up a service account that Vault will issue tokens for.
~> **Note**: It is highly recommended that the service account that Vault issues tokens for
is **NOT** the same service account that Vault itself uses.
The examples below use the namespace `test`, which you can create if it does not
already exist.
```shell-session
$ kubectl create namespace test
namespace/test created
```
Here is a simple setup of a service account, role, and role binding in the Kubernetes `test`
namespace with the basic permissions we will use for this document:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: test-service-account-with-generated-token
namespace: test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: test-role-list-pods
namespace: test
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: test-role-abilities
namespace: test
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: test-role-list-pods
subjects:
- kind: ServiceAccount
name: test-service-account-with-generated-token
namespace: test
```
You can create these objects with `kubectl apply -f`.
1. Enable the Kubernetes Secrets Engine:
```shell-session
$ vault secrets enable kubernetes
Success! Enabled the kubernetes Secrets Engine at: kubernetes/
```
By default, the secrets engine will mount at the same name as the engine, i.e.,
`kubernetes/` here. This can be changed by passing the `-path` argument when enabling.
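For example, to mount the engine at a custom path such as `k8s/` (the path name here is
illustrative):

```shell-session
$ vault secrets enable -path=k8s kubernetes
Success! Enabled the kubernetes secrets engine at: k8s/
```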
1. Configure the mount point. An empty config is allowed.
```shell-session
$ vault write -f kubernetes/config
```
Configuration options are available as specified in the
[API docs](/vault/api-docs/secret/kubernetes).
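For instance, if Vault runs outside the Kubernetes cluster, you can supply the connection
details explicitly instead of relying on the defaults. A sketch using the `kubernetes_host`,
`kubernetes_ca_cert`, and `service_account_jwt` parameters from the API docs, with
illustrative values:

```shell-session
$ vault write kubernetes/config \
    kubernetes_host="https://192.168.99.100:8443" \
    kubernetes_ca_cert=@/path/to/ca.crt \
    service_account_jwt=@/path/to/token
```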
1. You can now configure Kubernetes Secrets Engine to create a Vault role (**not** the same as a
Kubernetes role) that can generate service account tokens for the given service account:
```shell-session
$ vault write kubernetes/roles/my-role \
allowed_kubernetes_namespaces="*" \
service_account_name="test-service-account-with-generated-token" \
token_default_ttl="10m"
```
## Generating credentials
After a user has authenticated to Vault and has sufficient permissions, a write to the
`creds` endpoint for the Vault role will generate and return a new service account token.
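For reference, a minimal Vault ACL policy granting access to this endpoint might look like the
following sketch (assuming the default `kubernetes/` mount path):

```hcl
# Allow generating service account tokens from the Vault role "my-role"
path "kubernetes/creds/my-role" {
  capabilities = ["create", "update"]
}
```

With such a policy in place, writing to the endpoint returns a new token: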
```shell-session
$ vault write kubernetes/creds/my-role \
kubernetes_namespace=test
Key Value
--- -----
lease_id kubernetes/creds/my-role/31d771a6-...
lease_duration 10m0s
lease_renewable false
service_account_name test-service-account-with-generated-token
service_account_namespace test
service_account_token eyJHbGci0iJSUzI1NiIsImtpZCI6ImlrUEE...
```
You can use the service account token above (`eyJHbG...`) with any Kubernetes API request that
its service account is authorized for (through role bindings).
```shell-session
$ curl -sk $(kubectl config view --minify -o 'jsonpath={.clusters[].cluster.server}')/api/v1/namespaces/test/pods \
--header "Authorization: Bearer eyJHbGci0iJSUzI1Ni..."
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "1624"
},
"items": []
}
```
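Rather than waiting for the lease to expire, you can also revoke it early, which revokes the
token immediately (a sketch; substitute the full `lease_id` value returned earlier):

```shell-session
$ vault lease revoke kubernetes/creds/my-role/<lease_id>
All revocation operations queued successfully!
```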
When the lease expires or is revoked, you can verify that the token no longer works.
```shell-session
$ curl -sk $(kubectl config view --minify -o 'jsonpath={.clusters[].cluster.server}')/api/v1/namespaces/test/pods \
--header "Authorization: Bearer eyJHbGci0iJSUzI1Ni..."
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}
```
## TTL
Kubernetes service account tokens have a time-to-live (TTL). When a token expires it is
automatically revoked.
You can set a default (`token_default_ttl`) and a maximum TTL (`token_max_ttl`) when
creating or tuning the Vault role.
```shell-session
$ vault write kubernetes/roles/my-role \
allowed_kubernetes_namespaces="*" \
service_account_name="new-service-account-with-generated-token" \
token_default_ttl="10m" \
token_max_ttl="2h"
```
You can also set a TTL (`ttl`) when you generate the token from the credentials endpoint.
If a TTL is not specified, the token receives the role's default TTL; in either case, the TTL
cannot exceed the role's maximum TTL, if one is set.
```shell-session
$ vault write kubernetes/creds/my-role \
kubernetes_namespace=test \
ttl=20m
Key Value
--- -----
lease_id kubernetes/creds/my-role/31d771a6-...
lease_duration 20m0s
lease_renewable false
service_account_name new-service-account-with-generated-token
service_account_namespace test
service_account_token eyJHbGci0iJSUzI1NiIsImtpZCI6ImlrUEE...
```
You can verify the token's TTL by decoding the JWT token and extracting the `iat`
(issued at) and `exp` (expiration time) claims.
```shell-session
$ echo 'eyJhbGc...' | cut -d'.' -f2 | base64 -d | jq -r '.iat,.exp|todate'
2022-05-20T17:14:50Z
2022-05-20T17:34:50Z
```
## Audiences
Kubernetes service account tokens have audiences.
You can set default audiences (`token_default_audiences`) when creating or tuning the Vault role.
The Kubernetes cluster default audiences for service account tokens will be used if not specified.
```shell-session
$ vault write kubernetes/roles/my-role \
allowed_kubernetes_namespaces="*" \
service_account_name="new-service-account-with-generated-token" \
token_default_audiences="custom-audience"
```
You can also set audiences (`audiences`) when you generate the token from the credentials endpoint.
If audiences are not specified, the token receives the role's default audiences.
```shell-session
$ vault write kubernetes/creds/my-role \
kubernetes_namespace=test \
audiences="another-custom-audience"
Key Value
--- -----
lease_id kubernetes/creds/my-role/SriWQf0bPZ...
lease_duration 768h
lease_renewable false
service_account_name new-service-account-with-generated-token
service_account_namespace test
service_account_token eyJHbGci0iJSUzI1NiIsImtpZCI6ImlrUEE...
```
You can verify the token's audiences by decoding the JWT.
```shell-session
$ echo 'eyJhbGc...' | cut -d'.' -f2 | base64 -d
{"aud":["another-custom-audience"]...
```
## Automatically managing roles and service accounts
When configuring the Vault role, you can pass in parameters to specify that you want to
automatically generate the Kubernetes service account and role binding,
and optionally generate the Kubernetes role itself.
If you want to configure the Vault role to use a pre-existing Kubernetes role, but generate
the service account and role binding automatically, you can set the `kubernetes_role_name`
parameter.
```shell-session
$ vault write kubernetes/roles/auto-managed-sa-role \
allowed_kubernetes_namespaces="test" \
kubernetes_role_name="test-role-list-pods"
```
~> **Note**: Vault's service account will also need access to the resources it is granting
access to. This can be done for the examples above with `kubectl -n test create rolebinding --role test-role-list-pods --serviceaccount=vault:vault vault-test-role-abilities`.
This is how Kubernetes prevents privilege escalation.
You can read more in the
[Kubernetes RBAC documentation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping).
You can then get credentials with the automatically generated service account.
```shell-session
$ vault write kubernetes/creds/auto-managed-sa-role \
kubernetes_namespace=test
Key Value
--- -----
lease_id kubernetes/creds/auto-managed-sa-role/cujRLYjKZUMQk6dkHBGGWm67
lease_duration 768h
lease_renewable false
service_account_name v-token-auto-man-1653001548-5z6hrgsxnmzncxejztml4arz
service_account_namespace test
service_account_token eyJHbGci0iJSUzI1Ni...
```
Furthermore, Vault can also automatically create the role in addition to the service account and
role binding by specifying the `generated_role_rules` parameter, which accepts a set of JSON or YAML
rules for the generated role.
```shell-session
$ vault write kubernetes/roles/auto-managed-sa-and-role \
allowed_kubernetes_namespaces="test" \
generated_role_rules='{"rules":[{"apiGroups":[""],"resources":["pods"],"verbs":["list"]}]}'
```
You can then get credentials in the same way as before.
```shell-session
$ vault write kubernetes/creds/auto-managed-sa-and-role \
kubernetes_namespace=test
Key Value
--- -----
lease_id kubernetes/creds/auto-managed-sa-and-role/pehLtegoTP8vCkcaQozUqOHf
lease_duration 768h
lease_renewable false
service_account_name v-token-auto-man-1653002096-4imxf3ytjh5hbyro9s1oqdo3
service_account_namespace test
service_account_token eyJHbGci0iJSUzI1Ni...
```
## API
The Kubernetes Secrets Engine has a full HTTP API. Please see the
[Kubernetes Secrets Engine API docs](/vault/api-docs/secret/kubernetes) for more details.
---
layout: docs
page_title: AliCloud - Secrets Engines
description: >-
The AliCloud secrets engine for Vault generates access tokens or STS
credentials
dynamically based on RAM policies or roles.
---
# AliCloud secrets engine
The AliCloud secrets engine dynamically generates AliCloud access tokens based on RAM
policies, or AliCloud STS credentials based on RAM roles. This generally
makes working with AliCloud easier, since it does not involve clicking in the web UI.
The AliCloud access tokens are time-based and are automatically revoked when the Vault
lease expires. STS credentials are short-lived, non-renewable, and expire on their own.
## Setup
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. Enable the AliCloud secrets engine:
```text
$ vault secrets enable alicloud
Success! Enabled the alicloud secrets engine at: alicloud/
```
By default, the secrets engine will mount at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. [Create a custom policy](https://www.alibabacloud.com/help/doc-detail/28640.htm)
in AliCloud that will be used for the access key you will give Vault. See "Example
RAM Policy for Vault".
1. [Create a user](https://www.alibabacloud.com/help/faq-detail/28637.htm) in AliCloud
with a name like "hashicorp-vault", and directly apply the new custom policy to that user
in the "User Authorization Policies" section.
1. Create an access key for that user in AliCloud, which is an action available in
AliCloud's UI on the user's page.
1. Configure that access key as the credentials that Vault will use to communicate with
AliCloud to generate credentials:
```text
$ vault write alicloud/config \
access_key=0wNEpMMlzy7szvai \
secret_key=PupkTg8jdmau1cXxYacgE736PJj4cA
```
Alternatively, the AliCloud secrets engine can pick up credentials set as environment variables,
or credentials available through instance metadata. Since it checks current credentials on every API call,
changes in credentials will be picked up almost immediately without a Vault restart.
If available, we recommend using instance metadata for these credentials as they are the most
secure option. To do so, simply ensure that the instance upon which Vault is running has sufficient
privileges, and do not add any config.
1. Configure a role describing how credentials will be granted.
To generate access tokens using only policies that have already been created in AliCloud:
```text
$ vault write alicloud/role/policy-based \
remote_policies='name:AliyunOSSReadOnlyAccess,type:System' \
remote_policies='name:AliyunRDSReadOnlyAccess,type:System'
```
To generate access tokens using only policies that will be dynamically created in AliCloud by
Vault:
```text
$ vault write alicloud/role/policy-based \
inline_policies=-<<EOF
[
{
"Statement": [
{
"Action": "rds:Describe*",
"Effect": "Allow",
"Resource": "*"
}
],
"Version": "1"
},
{...}
]
EOF
```
Both `inline_policies` and `remote_policies` may be used together. However, neither may be
used when configuring how to generate STS credentials, like so:
```text
$ vault write alicloud/role/role-based \
role_arn='acs:ram::5138828231865461:role/hastrustedactors'
```
Any role specified by `role_arn` must have had "trusted actors" added when it was created;
they can only be added at role creation time. Trusted actors are entities that can assume the role.
Since Vault will be assuming the role to gain credentials, the user behind the `access_key` and
`secret_key` in the config must qualify as a trusted actor.
### Helpful links
- [More on roles](https://www.alibabacloud.com/help/doc-detail/28649.htm)
- [More on policies](https://www.alibabacloud.com/help/doc-detail/28652.htm)
### Example RAM policy for Vault
While AliCloud credentials can be supplied by environment variables, an explicit
setting in the `alicloud/config`, or through instance metadata, the resulting
credentials need sufficient permissions to issue secrets. The necessary permissions
vary based on the ways roles are configured.
This is an example RAM policy that would allow you to create credentials using
any type of role:
```json
{
"Statement": [
{
"Action": [
"ram:CreateAccessKey",
"ram:DeleteAccessKey",
"ram:CreatePolicy",
"ram:DeletePolicy",
"ram:AttachPolicyToUser",
"ram:DetachPolicyFromUser",
"ram:CreateUser",
"ram:DeleteUser",
"sts:AssumeRole"
],
"Effect": "Allow",
"Resource": "*"
}
],
"Version": "1"
}
```
However, the policy you use should only allow the actions you actually need
for how your roles are configured.
If any roles are using `inline_policies`, you need the following actions:
- `"ram:CreateAccessKey"`
- `"ram:DeleteAccessKey"`
- `"ram:AttachPolicyToUser"`
- `"ram:DetachPolicyFromUser"`
- `"ram:CreateUser"`
- `"ram:DeleteUser"`
If any roles are using `remote_policies`, you need the following actions:
- All listed for `inline_policies`
- `"ram:CreatePolicy"`
- `"ram:DeletePolicy"`
If any roles are using `role_arn`, you need the following actions:
- `"sts:AssumeRole"`
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can generate credentials.
1. Generate a new access key by reading from the `/creds` endpoint with the name
of the role:
```text
$ vault read alicloud/creds/policy-based
Key Value
--- -----
lease_id alicloud/creds/policy-based/f3e92392-7d9c-09c8-c921-575d62fe80d8
lease_duration 768h
lease_renewable true
access_key 0wNEpMMlzy7szvai
secret_key PupkTg8jdmau1cXxYacgE736PJj4cA
```
The `access_key` and `secret_key` returned are also known as the
`"AccessKeyId"` and `"AccessKeySecret"`, respectively, in Alibaba's
docs.
Creds retrieved for a role using a `role_arn` include the additional
fields `expiration` and `security_token`, like so:
```text
$ vault read alicloud/creds/role-based
Key Value
--- -----
lease_id alicloud/creds/role-based/f3e92392-7d9c-09c8-c921-575d62fe80d9
lease_duration 59m59s
lease_renewable false
access_key STS.L4aBSCSJVMuKg5U1vFDw
secret_key wyLTSmsyPGP1ohvvw8xYgB29dlGI8KMiH2pKCNZ9
security_token CAESrAIIARKAAShQquMnLIlbvEcIxO6wCoqJufs8sWwieUxu45hS9AvKNEte8KRUWiJWJ6Y+YHAPgNwi7yfRecMFydL2uPOgBI7LDio0RkbYLmJfIxHM2nGBPdml7kYEOXmJp2aDhbvvwVYIyt/8iES/R6N208wQh0Pk2bu+/9dvalp6wOHF4gkFGhhTVFMuTDRhQlNDU0pWTXVLZzVVMXZGRHciBTQzMjc0KgVhbGljZTCpnJjwySk6BlJzYU1ENUJuCgExGmkKBUFsbG93Eh8KDEFjdGlvbkVxdWFscxIGQWN0aW9uGgcKBW9zczoqEj8KDlJlc291cmNlRXF1YWxzEghSZXNvdXJjZRojCiFhY3M6b3NzOio6NDMyNzQ6c2FtcGxlYm94L2FsaWNlLyo=
expiration 2018-08-15T21:58:00Z
```
## API
The AliCloud secrets engine has a full HTTP API. Please see the
[AliCloud secrets engine API](/vault/api-docs/secret/alicloud) for more
details.
---
layout: docs
page_title: TOTP - Secrets Engines
description: The TOTP secrets engine for Vault generates time-based one-time use passwords.
---
# TOTP secrets engine
The TOTP secrets engine generates time-based credentials according to the TOTP
standard. The secrets engine can also be used to generate a new key and validate
passwords generated by that key.
The TOTP secrets engine can act as both a generator (like Google Authenticator)
and a provider (like the Google.com sign-in service).
## As a generator
The TOTP secrets engine can act as a TOTP code generator. In this mode, it can
replace traditional TOTP generators like Google Authenticator. It provides an
added layer of security since the ability to generate codes is guarded by
policies and the entire process is audited.
### Setup
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. Enable the TOTP secrets engine:
```text
$ vault secrets enable totp
Success! Enabled the totp secrets engine at: totp/
```
By default, the secrets engine will mount at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure a named key. The name of this key will be a human identifier as to
its purpose.
```text
$ vault write totp/keys/my-key \
url="otpauth://totp/Vault:[email protected]?secret=Y64VEVMBTSXCYIWRSHRNDZW62MPGVU2G&issuer=Vault"
Success! Data written to: totp/keys/my-key
```
The `url` is the `otpauth://` URL containing the secret key, i.e., the value encoded in
the barcode provided by the third-party service.
### Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can generate credentials.
1. Generate a new time-based OTP by reading from the `/code` endpoint with the
name of the key:
```text
$ vault read totp/code/my-key
Key Value
--- -----
code 260610
```
Using ACLs, it is possible to restrict use of the TOTP secrets engine such
that trusted operators can manage the key definitions, while users and
applications can only generate codes for the keys they are allowed to read.
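For example, a sketch of a Vault ACL policy for an application that may only generate codes
for the key created above, without any access to the key definitions:

```hcl
# Allow generating TOTP codes for "my-key" only
path "totp/code/my-key" {
  capabilities = ["read"]
}
```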
## As a provider
The TOTP secrets engine can also act as a TOTP provider. In this mode, it can be
used to generate new keys and validate passwords generated using those keys.
### Setup
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. Enable the TOTP secrets engine:
```text
$ vault secrets enable totp
Success! Enabled the totp secrets engine at: totp/
```
By default, the secrets engine will mount at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Create a named key, using the `generate` option. This tells Vault to be the
provider:
```text
$ vault write totp/keys/my-user \
generate=true \
issuer=Vault \
[email protected]
Key Value
--- -----
barcode iVBORw0KGgoAAAANSUhEUgAAAMgAAADIEAAAAADYoy0BA...
url otpauth://totp/Vault:[email protected]?algorithm=SHA1&digits=6&issuer=Vault&period=30&secret=V7MBSK324I7KF6KVW34NDFH2GYHIF6JY
```
The response includes a base64-encoded barcode and OTP url. Both are
equivalent. Give these to the user who is authenticating with TOTP.
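One convenient way to deliver the barcode is to decode it into a PNG image. A sketch that runs
the creation step with the CLI's `-field` flag to capture only the barcode (use this in place
of the plain creation command, since it generates the key):

```shell-session
$ vault write -field=barcode totp/keys/my-user \
    generate=true \
    issuer=Vault \
    [email protected] \
    | base64 -d > my-user-qr.png
```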
### Usage
1. As a user, validate a TOTP code generated by a third-party app:
```text
$ vault write totp/code/my-user code=886531
Key Value
--- -----
valid true
```
## API
The TOTP secrets engine has a full HTTP API. Please see the
[TOTP secrets engine API](/vault/api-docs/secret/totp) for more
details.
---
layout: docs
page_title: Nomad Secrets Engine
description: The Nomad secrets engine for Vault generates tokens for Nomad dynamically.
---
# Nomad secrets engine
@include 'x509-sha1-deprecation.mdx'
Name: `Nomad`
Nomad is a simple, flexible scheduler and workload orchestrator. The Nomad secrets engine for Vault generates [Nomad](https://www.nomadproject.io/)
ACL tokens dynamically based on pre-existing Nomad ACL policies.
This page will show a quick start for this secrets engine. For detailed documentation
on every path, use `vault path-help` after mounting the secrets engine.
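For example, with the engine mounted at the default `nomad/` path:

```shell-session
$ vault path-help nomad
```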
~> **Version information** ACLs are only available on Nomad 0.7.0 and above.
## Quick start
The first step to using the Vault secrets engine is to enable it.
```shell-session
$ vault secrets enable nomad
Successfully mounted 'nomad' at 'nomad'!
```
Optionally, we can configure the lease settings for credentials generated
by Vault. This is done by writing to the `config/lease` key:
```shell-session
$ vault write nomad/config/lease ttl=3600 max_ttl=86400
Success! Data written to: nomad/config/lease
```
For a quick start, you can use the SecretID token provided by the [Nomad ACL bootstrap
process](/nomad/tutorials/access-control#generate-the-initial-token), although this
is discouraged for production deployments.
```shell-session
$ nomad acl bootstrap
Accessor ID = 95a0ee55-eaa6-2c0a-a900-ed94c156754e
Secret ID = c25b6ca0-ea4e-000f-807a-fd03fcab6e3c
Name = Bootstrap Token
Type = management
Global = true
Policies = n/a
Create Time = 2017-09-20 19:40:36.527512364 +0000 UTC
Create Index = 7
Modify Index = 7
```
The suggested pattern is to generate a token specifically for Vault, following the
[Nomad ACL guide](/nomad/tutorials/access-control).
Next, we must configure Vault to know how to contact Nomad.
This is done by writing the access information:
```shell-session
$ vault write nomad/config/access \
address=http://127.0.0.1:4646 \
token=adf4238a-882b-9ddc-4a9d-5b6758e4159e
Success! Data written to: nomad/config/access
```
In this case, we've configured Vault to connect to Nomad
on the default port with the loopback address. We've also provided
an ACL token to use with the `token` parameter. Vault must have a management
type token so that it can create and revoke ACL tokens.
The next step is to configure a role. A role is a logical name that maps
to a set of policy names used to generate those credentials. For example, let's create
a "monitoring" role that maps to a "readonly" policy:
```shell-session
$ vault write nomad/role/monitoring policies=readonly
Success! Data written to: nomad/role/monitoring
```
The secrets engine expects either a single policy name or a comma-separated list of policy names.
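For example, a role that maps to multiple Nomad policies (the policy names here are hypothetical):

```shell-session
$ vault write nomad/role/ops policies=readonly,node-read
Success! Data written to: nomad/role/ops
```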
To generate a new Nomad ACL token, we simply read from that role:
```shell-session
$ vault read nomad/creds/monitoring
Key Value
--- -----
lease_id nomad/creds/monitoring/78ec3ef3-c806-1022-4aa8-1dbae39c760c
lease_duration 768h0m0s
lease_renewable true
accessor_id a715994d-f5fd-1194-73df-ae9dad616307
secret_id b31fb56c-0936-5428-8c5f-ed010431aba9
```
Here we can see that Vault has generated a new Nomad ACL token for us.
We can test this token by reading it back in Nomad (by its accessor):
```shell-session
$ nomad acl token info a715994d-f5fd-1194-73df-ae9dad616307
Accessor ID = a715994d-f5fd-1194-73df-ae9dad616307
Secret ID = b31fb56c-0936-5428-8c5f-ed010431aba9
Name = Vault example root 1505945527022465593
Type = client
Global = false
Policies = [readonly]
Create Time = 2017-09-20 22:12:07.023455379 +0000 UTC
Create Index = 138
Modify Index = 138
```
## Tutorial
Refer to [Generate Nomad Tokens with HashiCorp
Vault](/nomad/tutorials/integrate-vault/vault-nomad-secrets) for a
step-by-step tutorial.
## API
The Nomad secrets engine has a full HTTP API. Please see the
[Nomad Secrets Engine API](/vault/api-docs/secret/nomad) for more
details.
---
layout: docs
page_title: Consul - Secrets Engines
description: The Consul secrets engine for Vault generates tokens for Consul dynamically.
---
# Consul secrets engine
@include 'x509-sha1-deprecation.mdx'
The Consul secrets engine generates [Consul](https://www.consul.io/) API tokens
dynamically based on Consul ACL policies.
-> **Note:** See the Consul Agent [config documentation](/consul/docs/agent/config/config-files#acl-parameters)
for details on how to enable Consul's ACL system.
## Setup
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. (Optional) If you're only looking to set up a quick test environment, you can start a
Consul Agent in dev mode in a separate terminal window.
```shell-session
$ consul agent -dev -hcl "acl { enabled = true }"
```
1. Enable the Consul secrets engine:
```shell-session
$ vault secrets enable consul
Success! Enabled the consul secrets engine at: consul/
```
By default, the secrets engine will mount at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
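For example, a second mount at a hypothetical `consul-dc2/` path:

```shell-session
$ vault secrets enable -path=consul-dc2 consul
Success! Enabled the consul secrets engine at: consul-dc2/
```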
1. Configure Vault to connect and authenticate to Consul.
Vault can bootstrap the Consul ACL system automatically if it is enabled and hasn't already
been bootstrapped. If you have already bootstrapped the ACL system, then you will need to
provide Vault with a management token. This can either be the bootstrap token or another
management token you've created yourself.
1. Configuring Vault without previously bootstrapping the Consul ACL system:
```shell-session
$ vault write consul/config/access \
address="127.0.0.1:8500"
Success! Data written to: consul/config/access
```
~> **Note:** Vault will silently store the bootstrap token as the configuration token when
it performs the automatic bootstrap; it will not be presented to the user. If you need
another management token, you will need to generate one by writing a Vault role with the
`global-management` policy and then reading new creds back from it.
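A hedged sketch of that flow (the `vault-mgmt` role name is illustrative):

```shell-session
$ vault write consul/roles/vault-mgmt consul_policies="global-management"
Success! Data written to: consul/roles/vault-mgmt

$ vault read consul/creds/vault-mgmt
```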
1. Configuring Vault after manually bootstrapping the Consul ACL system:
1. For Consul 1.4 and above, use the command line to generate a token with the appropriate policy:
```shell-session
$ CONSUL_HTTP_TOKEN="<bootstrap-token>" consul acl token create -policy-name="global-management"
AccessorID: 865dc5e9-e585-3180-7b49-4ddc0fc45135
SecretID: ef35f0f1-885b-0cab-573c-7c91b65a7a7e
Description:
Local: false
Create Time: 2018-10-22 17:40:24.128188 -0700 PDT
Policies:
00000000-0000-0000-0000-000000000001 - global-management
```
```shell-session
$ vault write consul/config/access \
address="127.0.0.1:8500" \
token="ef35f0f1-885b-0cab-573c-7c91b65a7a7e"
Success! Data written to: consul/config/access
```
1. For Consul versions below 1.4, acquire a [management token][consul-mgmt-token] from Consul, using the
`acl_master_token` from your Consul configuration file or another management token:
```shell-session
$ curl \
--header "X-Consul-Token: my-management-token" \
--request PUT \
--data '{"Name": "sample", "Type": "management"}' \
https://consul.rocks/v1/acl/create
```
Vault must have a management type token so that it can create and revoke ACL
tokens. The response will return a new token:
```json
{
"ID": "7652ba4c-0f6e-8e75-5724-5e083d72cfe4"
}
```
1. Configure a role that maps a name in Vault to a Consul ACL policy. Depending on your Consul version,
you will either provide a policy document and a token type, a list of policies or roles, or a set of
service or node identities. When users generate credentials, they are generated against this role.
1. For Consul versions 1.8 and above, attach [a Consul node identity](/consul/commands/acl/token/create#node-identity) to the role.
```shell-session
$ vault write consul/roles/my-role \
node_identities="server-1:dc1" \
node_identities="server-2:dc1"
Success! Data written to: consul/roles/my-role
```
1. For Consul versions 1.5 and above, attach either [a role in Consul](/consul/api-docs/acl/roles) or [a Consul service identity](/consul/commands/acl/token/create#service-identity) to the role:
```shell-session
$ vault write consul/roles/my-role consul_roles="api-server"
Success! Data written to: consul/roles/my-role
```
```shell-session
$ vault write consul/roles/my-role \
service_identities="myservice-1:dc1,dc2" \
service_identities="myservice-2:dc1"
Success! Data written to: consul/roles/my-role
```
1. For Consul versions 1.4 and above, generate [a policy in Consul](/consul/tutorials/security/access-control-setup-production),
and proceed to link it to the role:
```shell-session
$ vault write consul/roles/my-role consul_policies="readonly"
Success! Data written to: consul/roles/my-role
```
1. For Consul versions below 1.4, the policy must be base64-encoded. The policy language is
[documented by Consul](/consul/docs/security/acl/acl-legacy). Support for this method is
deprecated as of Vault 1.11.
Write a policy and proceed to link it to the role:
```shell-session
$ vault write consul/roles/my-role policy="$(echo 'key "" { policy = "read" }' | base64)"
Success! Data written to: consul/roles/my-role
```
-> **Token lease duration:** If you do not specify a value for `ttl` (or `lease` for Consul versions below 1.4), the
tokens created using Vault's Consul secrets engine are created with a Time To Live (TTL) of 30 days. You can change
the lease duration by passing `ttl=<duration>` to the command above, where duration is a [duration format string](/vault/docs/concepts/duration-format), as in the sketch below.
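For example, a hedged sketch of overriding the default lease duration on the role above:

```shell-session
$ vault write consul/roles/my-role consul_policies="readonly" ttl=1h max_ttl=24h
Success! Data written to: consul/roles/my-role
```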
1. You may further limit a role's access by adding the optional parameters `consul_namespace` and
`partition`. Please refer to Consul's [namespace documentation](/consul/docs/enterprise/namespaces) and
[admin partition documentation](/consul/docs/enterprise/admin-partitions) for further information about
these features.
1. For Consul version 1.11 and above, link an admin partition to a role:
```shell-session
$ vault write consul/roles/my-role consul_roles="admin-management" partition="admin1"
Success! Data written to: consul/roles/my-role
```
1. For Consul versions 1.7 and above, link a Consul namespace to the role:
```shell-session
$ vault write consul/roles/my-role consul_roles="namespace-management" consul_namespace="ns1"
Success! Data written to: consul/roles/my-role
```
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can generate credentials.
Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```shell-session
$ vault read consul/creds/my-role
Key Value
--- -----
lease_id consul/creds/my-role/b2469121-f55f-53c5-89af-a3ba52b1d6d8
lease_duration 768h
lease_renewable true
accessor c81b9cf7-2c4f-afc7-1449-4e442b831f65
consul_namespace ns1
local false
partition admin1
token 642783bf-1540-526f-d4de-fe1ac1aed6f0
```
!> **Expired token rotation:** Once a token's TTL expires, Consul operations are no longer allowed with it.
This requires you to have an external process to rotate tokens. At this time, the recommended approach for operators
is to rotate the tokens manually by creating a new token using the `vault read consul/creds/my-role` command. Once
the token is synchronized with Consul, apply the token to the agents using the Consul API or CLI, as sketched below.
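A hedged sketch of one manual rotation pass, assuming the `my-role` role and a Consul agent reachable from the
current shell:

```shell-session
$ vault read -field=token consul/creds/my-role
642783bf-1540-526f-d4de-fe1ac1aed6f0

$ consul acl set-agent-token agent 642783bf-1540-526f-d4de-fe1ac1aed6f0
```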
## Tutorial
Refer to [Administer Consul Access Control Tokens with
Vault](/consul/tutorials/vault-secure/vault-consul-secrets) for a
step-by-step tutorial.
## API
The Consul secrets engine has a full HTTP API. Please see the
[Consul secrets engine API](/vault/api-docs/secret/consul) for more
details.
[consul-mgmt-token]: /consul/api-docs/acl#acl_create
---
layout: docs
page_title: Terraform Cloud Secret Backend
description: The Terraform Cloud secret backend for Vault generates tokens for Terraform Cloud dynamically.
---
# Terraform Cloud secret backend
Name: `Terraform Cloud`
The Terraform Cloud secret backend for Vault generates
[Terraform Cloud](https://cloud.hashicorp.com/products/terraform)
API tokens dynamically for Organizations, Teams, and Users.
This page will show a quick start for this backend. For detailed documentation
on every path, use `vault path-help` after mounting the backend.
~> **Terraform Enterprise Support:** this secret engine supports both Terraform
Cloud ([app.terraform.io](https://app.terraform.io/session)) and on-prem
Terraform Enterprise. Any version requirements will be documented alongside the
features that require them, if any.
## Quick start
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. Enable the Terraform Cloud secrets engine:
```shell-session
$ vault secrets enable terraform
Success! Enabled the terraform cloud secrets engine at: terraform/
```
By default, the secrets engine will mount at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
2. Configure Vault to connect and authenticate to Terraform Cloud:
```shell-session
$ vault write terraform/config \
token=Vhz7652ba4c-0f6e-8e75-5724-5e083d72cfe4
Success! Data written to: terraform/config
```
See [Terraform Cloud's documentation on API
tokens](/terraform/cloud-docs/users-teams-organizations/api-tokens)
to determine the appropriate API token for use with the secret engine. In
order to perform all operations, a User API token is recommended.
3. Configure a role that maps a name in Vault to a Terraform Cloud User. At
this time the Terraform Cloud API does not allow dynamic user generation. As
a result this secret engine creates dynamic API tokens for an existing user,
and manages the lifecycle of that API token. You will need to know the User
ID in order to generate User API tokens for that user. You can use the
Terraform Cloud [Account
API](/terraform/cloud-docs/api-docs/account) to find the
desired User ID.
```shell-session
$ vault write terraform/role/my-role user_id=user-12345abcde
Success! Data written to: terraform/role/my-role
```
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can generate credentials.
Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```shell-session
$ vault read terraform/creds/my-role
Key Value
--- -----
lease_id terraform/creds/my-user/A_LEASE_ID_PdvmJjACTtKrY2I
lease_duration 180s
lease_renewable true
token TJFDSIFDSKFEKZX.FKFKA.akjlfdiouajlkdakadfiowe
token_id at-123acbdfask
```
## Organization, team, and user roles
Terraform Cloud supports three distinct types of API tokens: Organizations,
Teams, and Users. Each token type has distinct access levels and generation
workflows. A given Vault role can manage any one of the three types at a time;
however, there are important differences to be aware of.
### Organization and team roles
The Terraform Cloud API limits both Organization and Team roles to **one active
token at any given time**. Generating a new Organization or Team API token by
reading the credentials in Vault or otherwise generating them on
[app.terraform.io](https://app.terraform.io/session) will effectively revoke **any**
existing API token for that Organization or Team.
Due to this behavior, Organization and Team API tokens created by Vault will be
stored and returned on future requests, until the credentials get rotated. This
is to prevent unintentional revocation of tokens that are currently in-use.
Below is an example of creating a Vault role to manage an Organization
API token and rotating the token:
```shell-session
$ vault write terraform/role/testing organization="${TF_ORGANIZATION}"
Success! Data written to: terraform/role/testing
$ vault write -f terraform/rotate-role/testing
Success! Data written to: terraform/rotate-role/testing
```
The API token is retrieved by reading the credentials for the role:
```shell-session
$ vault read terraform/creds/testing
Key Value
--- -----
organization hashicorp-vault-testing
role testing
token <example token>
token_id at-fqvtdTQ5kQWcjUfG
```
### User roles
Traditionally, Vault secret engines create dynamic users and dynamic credentials
along with them. At the time of writing, the Terraform Cloud API does not allow
for creating dynamic users. Instead, the Terraform Cloud secret engine creates
dynamic User API tokens by configuring a Vault role to manage an existing
Terraform Cloud user. The lifecycle of these tokens is managed by Vault and
will auto expire according to the configured TTL and max TTL of the Vault
role.
Below is an example of creating a Vault role to manage User API tokens:
```shell-session
$ vault write terraform/role/user-testing user_id="${TF_USER_ID}"
Success! Data written to: terraform/role/user-testing
```
The API token is retrieved by reading the credentials for the role:
```shell-session
$ vault read terraform/creds/user-testing
Key Value
--- -----
role user-testing
token <example token>
token_id at-fqvtdTQ5kQWcjUfG
```
Please see the [Terraform Cloud API
Token documentation for more
information](/terraform/cloud-docs/users-teams-organizations/api-tokens).
## Tutorial
Refer to [Terraform Cloud Secrets
Engine](/vault/tutorials/secrets-management/terraform-secrets-engine)
for a step-by-step tutorial.
## API
The Terraform Cloud secrets engine has a full HTTP API. Please see the
[Terraform Cloud secrets engine API](/vault/api-docs/secret/terraform) for more
details.
---
layout: docs
page_title: RabbitMQ - Secrets Engines
description: >-
The RabbitMQ secrets engine for Vault generates user credentials to access
RabbitMQ.
---
# RabbitMQ secrets engine
The RabbitMQ secrets engine generates user credentials dynamically based on
configured permissions and virtual hosts. This means that services that need to
access a virtual host no longer need to hardcode credentials.
With every service accessing the messaging queue with unique credentials,
auditing is much easier when questionable data access is discovered. Easily
track issues down to a specific instance of a service based on the RabbitMQ
username.
Vault uses both its own internal revocation system and deletion of the
RabbitMQ users it creates to ensure that users become invalid within a
reasonable time of the lease expiring.
## Setup
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. Enable the RabbitMQ secrets engine:
```text
$ vault secrets enable rabbitmq
Success! Enabled the rabbitmq secrets engine at: rabbitmq/
```
By default, the secrets engine will mount at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure the credentials that Vault uses to communicate with RabbitMQ to
generate credentials:
```text
$ vault write rabbitmq/config/connection \
connection_uri="http://localhost:15672" \
username="admin" \
password="password"
Success! Data written to: rabbitmq/config/connection
```
It is important that the Vault user have the administrator privilege to
manage users.
1. Configure a role that maps a name in Vault to virtual host permissions:
```text
$ vault write rabbitmq/roles/my-role \
vhosts='{"/":{"write": ".*", "read": ".*"}}'
Success! Data written to: rabbitmq/roles/my-role
```
By writing to the `roles/my-role` path we are defining the `my-role` role.
This role will be created by evaluating the given `vhosts`, `vhost_topics`
and `tags` statements. By default, no tags, no virtual hosts or topic
permissions are assigned to a role. If no topic permissions are defined
and the default authorisation backend is used, publishing to a topic
exchange or consuming from a topic is always authorised. You can read
more about [RabbitMQ management tags][rmq-perms]
and [RabbitMQ topic authorization][rmq-topics].
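For example, a hedged sketch of a role that also grants topic permissions on the built-in
`amq.topic` exchange (the role name is illustrative):

```text
$ vault write rabbitmq/roles/my-topic-role \
    vhosts='{"/":{"configure": ".*", "write": ".*", "read": ".*"}}' \
    vhost_topics='{"/":{"amq.topic":{"write": ".*", "read": ".*"}}}'
Success! Data written to: rabbitmq/roles/my-topic-role
```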
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can generate credentials.
1. Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```text
$ vault read rabbitmq/creds/my-role
Key Value
--- -----
lease_id rabbitmq/creds/my-role/I39Hu8XXOombof4wiK5bKMn9
lease_duration 768h
lease_renewable true
password 3yNDBikgQvrkx2VA2zhq5IdSM7IWk1RyMYJr
username root-39669250-3894-8032-c420-3d58483ebfc4
```
Using ACLs, it is possible to restrict using the rabbitmq secrets engine
such that trusted operators can manage the role definitions, and both users
and applications are restricted in the credentials they are allowed to read.
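For instance, a minimal Vault policy sketch that only allows reading credentials for
`my-role` (the policy name is illustrative):

```text
$ vault policy write rabbitmq-consumer - <<EOF
path "rabbitmq/creds/my-role" {
  capabilities = ["read"]
}
EOF
```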
## API
The RabbitMQ secrets engine has a full HTTP API. Please see the
[RabbitMQ secrets engine API](/vault/api-docs/secret/rabbitmq) for more
details.
[rmq-perms]: https://www.rabbitmq.com/management.html#permissions
[rmq-topics]: https://www.rabbitmq.com/access-control.html#topic-authorisation
---
layout: docs
page_title: Azure - Secrets Engine
description: |-
The Azure Vault secrets engine dynamically generates Azure
service principals and role assignments.
---
# Azure secrets engine
The Azure secrets engine dynamically generates Azure service principals along
with role and group assignments. Vault roles can be mapped to one or more Azure
roles, and optionally group assignments, providing a simple, flexible way to
manage the permissions granted to generated service principals.
Each service principal is associated with a Vault lease. When the lease expires
(either during normal revocation or through early revocation), the service
principal is automatically deleted.
If an existing service principal is specified as part of the role configuration,
a new password will be dynamically generated instead of a new service principal.
The password will be deleted when the lease is revoked.
## Setup
<Note>
You can configure the Azure secrets engine with the Vault API or
established environment variables such as `AZURE_CLIENT_ID` or
`AZURE_CLIENT_SECRET`. If you use both methods, note that
environment variables always take precedence over API values.
</Note>
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. Enable the Azure secrets engine:
```shell
$ vault secrets enable azure
Success! Enabled the azure secrets engine at: azure/
```
By default, the secrets engine will mount at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure the secrets engine with account credentials:
```shell
$ vault write azure/config \
subscription_id=$AZURE_SUBSCRIPTION_ID \
tenant_id=$AZURE_TENANT_ID \
client_id=$AZURE_CLIENT_ID \
client_secret=$AZURE_CLIENT_SECRET
Success! Data written to: azure/config
```
If you are running Vault inside an Azure VM with MSI enabled, `client_id` and
`client_secret` may be omitted. For more information on authentication, see the [authentication](#authentication) section below.
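For example, a hedged sketch of an MSI-based configuration, which omits the client credentials:

```shell
$ vault write azure/config \
    subscription_id=$AZURE_SUBSCRIPTION_ID \
    tenant_id=$AZURE_TENANT_ID
Success! Data written to: azure/config
```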
In some cases, you cannot set sensitive account credentials in your
Vault configuration. For example, your organization may require that all
security credentials are short-lived or explicitly tied to a machine identity.
To provide managed identity security credentials to Vault, we recommend using Vault
[plugin workload identity federation](#plugin-workload-identity-federation-wif)
(WIF) as shown below.
1. Alternatively, configure the audience claim value and the Client, Tenant and Subscription IDs for plugin workload identity federation:
```text
$ vault write azure/config \
subscription_id=$AZURE_SUBSCRIPTION_ID \
tenant_id=$AZURE_TENANT_ID \
client_id=$AZURE_CLIENT_ID \
identity_token_audience=$TOKEN_AUDIENCE
```
The Vault identity token provider signs the plugin identity token JWT internally.
If a trust relationship exists between Vault and Azure through WIF, the secrets
engine can exchange the Vault identity token for a federated access token.
To configure a trusted relationship between Vault and Azure:
- You must configure the [identity token issuer backend](/vault/api-docs/secret/identity/tokens#configure-the-identity-tokens-backend)
for Vault.
- Azure must have a
[federated identity credential](https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation-create-trust?pivots=identity-wif-apps-methods-azp#configure-a-federated-identity-credential-on-an-app)
configured with information about the fully qualified and network-reachable
issuer URL for the Vault plugin
[identity token provider](/vault/api-docs/secret/identity/tokens#read-plugin-identity-well-known-configurations).
Establishing a trusted relationship between Vault and Azure ensures that Azure
can fetch JWKS
[public keys](/vault/api-docs/secret/identity/tokens#read-active-public-keys)
and verify the plugin identity token signature.
1. Configure a role. A role may be set up with either an existing service principal, or
a set of Azure roles that will be assigned to a dynamically created service principal.
To configure a role called "my-role" with an existing service principal:
```shell-session
$ vault write azure/roles/my-role \
application_object_id=<existing_app_obj_id> \
ttl=1h
```
Alternatively, to configure the role to create a new service principal with Azure roles:
```shell-session
$ vault write azure/roles/my-role ttl=1h azure_roles=-<<EOF
[
{
"role_name": "Contributor",
"scope": "/subscriptions/<uuid>/resourceGroups/Website"
}
]
EOF
```
Roles may also have their own TTL configuration that is separate from the mount's
TTL. For more information on roles see the [roles](#roles) section below.
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permissions, it can generate credentials. The usage pattern is the same
whether an existing or dynamic service principal is used.
To generate a credential using the "my-role" role:
```shell-session
$ vault read azure/creds/my-role
Key Value
--- -----
lease_id azure/creds/sp_role/1afd0969-ad23-73e2-f974-962f7ac1c2b4
lease_duration 60m
lease_renewable true
client_id 408bf248-dd4e-4be5-919a-7f6207a307ab
client_secret ad06228a-2db9-4e0a-8a5d-e047c7f32594
```
This endpoint generates a renewable set of credentials. The application can login
using the `client_id`/`client_secret` and will have the access provided by the configured
service principal or the Azure roles set in the "my-role" configuration.
## Root credential rotation
If the mount is configured with credentials directly, the credential's key may be
rotated to a Vault-generated value that is not accessible by the operator.
This will ensure that only Vault is able to access the "root" user that Vault uses to
manipulate dynamic & static credentials.
```shell-session
$ vault write -f azure/rotate-root
```
For more details on this operation, please see the
[Root Credential Rotation](/vault/api-docs/secret/azure#rotate-root) API docs.
## Roles
Vault roles let you configure either an existing service principal or a set of Azure roles, along with
role-specific TTL parameters. If an existing service principal is not provided, the configured Azure
roles will be assigned to a newly created service principal. The Vault role may optionally specify
role-specific `ttl` and/or `max_ttl` values. When the lease is created, the more restrictive of the
mount or role TTL value will be used.
### Application object IDs
If an existing service principal is to be used, the Application Object ID must be set on the Vault role.
This ID can be found by inspecting the desired Application with the `az` CLI tool, or via the Azure Portal. Note
that the Application **Object** ID must be provided, not the Application ID.
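A hedged sketch of looking up the Object ID with the `az` CLI (the display name is
illustrative; the `id` field is the Object ID, distinct from `appId`):

```shell-session
$ az ad app list \
    --display-name "my-existing-app" \
    --query "[].{displayName:displayName, objectId:id}" \
    --output table
```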
### Azure roles
If dynamic service principals are used, Azure roles must be configured on the Vault role.
Azure roles are provided as a JSON list, with each element describing an Azure role and scope to be assigned.
Azure roles may be specified using the `role_name` parameter ("Owner"), or `role_id`
("/subscriptions/.../roleDefinitions/...").
`role_id` is the definitive ID that's used during Vault operation; `role_name` is a convenience during
role management operations. All roles _must exist_ when the configuration is written or the operation will fail. The role lookup priority is:
1. If `role_id` is provided, it is validated and the corresponding `role_name` updated.
1. If only `role_name` is provided, a case-insensitive search-by-name is made, succeeding
only if _exactly one_ matching role is found. The `role_id` field will be updated with the matching role ID.
The `scope` must be provided for every role assignment.
### Azure groups
If dynamic service principals are used, a list of Azure groups may be configured on the Vault role.
When the service principal is created, it will be assigned to these groups. Similar to the format used
for specifying Azure roles, Azure groups may be referenced by either their `group_name` or `object_id`.
Group specification by name must yield a single matching group.
Example of role configuration:
```shell-session
$ vault write azure/roles/my-role \
ttl=1h \
max_ttl=24h \
azure_roles=@az_roles.json \
azure_groups=@az_groups.json
$ cat az_roles.json
[
{
"role_name": "Contributor",
"scope": "/subscriptions/<uuid>/resourceGroups/Website"
},
{
"role_id": "/subscriptions/<uuid>/providers/Microsoft.Authorization/roleDefinitions/<uuid>",
"scope": "/subscriptions/<uuid>"
},
{
"role_name": "This won't matter as it will be overwritten",
"role_id": "/subscriptions/<uuid>/providers/Microsoft.Authorization/roleDefinitions/<uuid>",
"scope": "/subscriptions/<uuid>/resourceGroups/Database"
}
]
$ cat az_groups.json
[
{
"group_name": "foo"
},
{
"group_name": "This won't matter as it will be overwritten",
"object_id": "a6a834a6-36c3-4575-8e2b-05095963d603"
}
]
```
### Permanently delete Azure objects
If dynamic service principals are used, the option to permanently delete the applications and service principals created by Vault may be configured on the Vault role.
When this option is enabled and a lease is expired or revoked, the application and service principal associated with the lease will be [permanently deleted](https://docs.microsoft.com/en-us/graph/api/directory-deleteditems-delete) from the Azure Active Directory.
As a result, these objects will not count toward the [quota](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits#active-directory-limits) of total resources in an Azure tenant. When this option is not enabled
and a lease is expired or revoked, the application and service principal associated with the lease will be deleted, but not permanently. These objects will be available to restore for 30 days from deletion.
Example of role configuration:
```shell-session
$ vault write azure/roles/my-role permanently_delete=true ttl=1h azure_roles=-<<EOF
[
{
"role_name": "Contributor",
"scope": "/subscriptions/<uuid>/resourceGroups/Website"
}
]
EOF
```
## Authentication
The Azure secrets backend must have sufficient permissions to read Azure role information and manage
service principals. The authentication parameters can be set in the backend configuration or as environment
variables. Environment variables will take precedence. The individual parameters are described in the
[configuration][config] section of the API docs.
If the client ID or secret are not present and Vault is running on an Azure VM, Vault will attempt to use
[Managed Service Identity (MSI)](https://docs.microsoft.com/en-us/azure/active-directory/managed-service-identity/overview)
to access Azure. Note that when MSI is used, tenant and subscription IDs must still be explicitly provided
in the configuration or environment variables.
### MS Graph API permissions
The following MS Graph [API permissions](https://learn.microsoft.com/en-us/azure/active-directory/develop/permissions-consent-overview#types-of-permissions)
must be assigned to the service principal provided to Vault for managing Azure. The permissions
differ depending on if you're using [dynamic or existing](#choosing-between-dynamic-or-existing-service-principals)
service principals.
#### Dynamic Service Principals
| Permission Name | Type |
| ----------------------------- | ----------- |
| Application.ReadWrite.OwnedBy | Application |
| GroupMember.ReadWrite.All | Application |
~> **Note**: If you plan to use the [rotate root](/vault/api-docs/secret/azure#rotate-root)
credentials API, you'll need to change `Application.ReadWrite.OwnedBy` to `Application.ReadWrite.All`.
#### Existing Service Principals
| Permission Name | Type |
| ----------------------------- | ----------- |
| Application.ReadWrite.All | Application |
| GroupMember.ReadWrite.All | Application |
### Role assignments
The following Azure [role assignments](https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-cli)
must be granted in order for the secrets engine to manage role assignments for service
principals it creates.
| Role | Scope | Security Principal |
|------------------------------------------------| ------------ | ------------------------------------------- |
| [User Access Administrator][user_access_admin] | Subscription | Service Principal ID given in configuration |
## Plugin Workload Identity Federation (WIF)
<EnterpriseAlert product="vault" />
The Azure secrets engine supports the plugin WIF workflow, and has a source of identity called
a plugin identity token. The plugin identity token is a JWT that is signed internally by Vault's
[plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration).
If there is a trust relationship configured between Vault and Azure through
[workload identity federation](https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation),
the secrets engine can exchange its identity token for short-lived access tokens needed to
perform its actions.
Exchanging identity tokens for access tokens lets the Azure secrets engine
operate without configuring explicit access to sensitive client credentials.
To configure the secrets engine to use plugin WIF:
1. Ensure that Vault [openid-configuration](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-openid-configuration)
and [public JWKS](/vault/api-docs/secret/identity/tokens#read-plugin-identity-token-issuer-s-public-jwks)
APIs are network-reachable by Azure. We recommend using an API proxy or gateway
if you need to limit Vault API exposure.
1. Configure a
[federated identity credential](https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation-create-trust?pivots=identity-wif-apps-methods-azp#configure-a-federated-identity-credential-on-an-app)
on a dedicated application registration in Azure to establish a trust relationship with Vault.
1. The issuer URL **must** point at your [Vault plugin identity token issuer](/vault/api-docs/secret/identity/tokens#read-plugin-workload-identity-issuer-s-openid-configuration) with the
`/.well-known/openid-configuration` suffix removed. For example:
`https://host:port/v1/identity/oidc/plugins`.
1. The subject identifier **must** match the unique `sub` claim issued by plugin identity tokens.
The subject identifier should have the form `plugin-identity:<NAMESPACE>:secret:<AZURE_MOUNT_ACCESSOR>`.
1. The audience should be under 600 characters. The default value in Azure is `api://AzureADTokenExchange`.
1. Configure the Azure secrets engine with the subscription, client and tenant IDs and the OIDC audience value.
```shell-session
$ vault write azure/config \
subscription_id=$AZURE_SUBSCRIPTION_ID \
tenant_id=$AZURE_TENANT_ID \
client_id=$AZURE_CLIENT_ID \
identity_token_audience="vault.example/v1/identity/oidc/plugins"
```
Your secrets engine can now use plugin WIF for its configuration credentials.
By default, WIF [credentials](https://learn.microsoft.com/en-us/entra/identity-platform/access-tokens#token-lifetime)
have a time-to-live of 1 hour and automatically refresh when they expire.
Please see the [API documentation](/vault/api-docs/secret/azure#configure-access)
for more details on the fields associated with plugin WIF.
## Choosing between dynamic or existing service principals
Dynamic service principals are preferred if the desired Azure resources can be provided
via the RBAC system and Azure roles defined in the Vault role. This form of credential is
completely decoupled from any other clients, is not subject to permission changes after
issuance, and offers the best audit granularity.
Access to some Azure services cannot be provided with the RBAC system, however. In these
cases, an existing service principal can be set up with the necessary access, and Vault
can create new passwords for this service principal. Any changes to the service principal
permissions affect all clients. Furthermore, Azure does not provide any logging with
regard to _which_ credential was used for an operation.
An important limitation when using an existing service principal is that Azure limits the
number of passwords for a single Application. This limit is based on Application object
size and isn't firmly specified, but in practice hundreds of passwords can be issued per
Application. An error will be returned if the object size limit is reached. This limit can be
managed by reducing the role TTL, or by creating another Vault role against a different
Azure service principal configured with the same permissions.
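For example, a second Vault role pointing at a different application with the same
permissions might look like the following (the object ID is a placeholder):

```shell-session
$ vault write azure/roles/my-role-2 \
    application_object_id=<EXISTING_APP_OBJECT_ID> \
    ttl=1h
```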
## Additional notes
- **If a referenced Azure role doesn't exist, a credential will not be generated.**
Service principals will only be generated if _all_ role assignments are successful.
This is important to note if you're using custom Azure role definitions that might be deleted
at some point.
- Azure roles are assigned only once, when the service principal is created. If the
Vault role changes the list of Azure roles, these changes will not be reflected in
any existing service principal, even after token renewal.
- The time required to issue a credential is roughly proportional to the number of
Azure roles that must be assigned. This operation may take some time (tens of seconds
are common, and over a minute has been observed).
- Service principal credential timeouts are not used. Vault will revoke access by
deleting the service principal.
- The Application Name for dynamic service principals will be prefixed with `vault-`. Similarly,
the `keyId` of any passwords added to an existing service principal will begin with
`ffffff`. These may be used to search for Vault-created credentials using the `az` tool
or Portal.
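For example, one way to find Vault-created applications with the `az` CLI (the filter uses
standard Microsoft Graph OData syntax):

```shell-session
$ az ad app list --filter "startswith(displayName,'vault-')"
```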
## Azure debug logs
The Azure secrets engine plugin supports debug logging, which includes additional information
about requests and responses from the Azure API.
To enable the Azure debug logs, set the `AZURE_SDK_GO_LOGGING` environment variable to `all` on your Vault
server:
```shell-session
$ export AZURE_SDK_GO_LOGGING=all
```
## Help & support
The Azure secrets engine is written as an external Vault plugin and
thus exists outside the main Vault repository. It is automatically bundled with
Vault releases, but the code is managed separately.
Please report issues, add feature requests, and submit contributions to the
[vault-plugin-secrets-azure repo][repo] on GitHub.
## Tutorial
Refer to the [Azure Secrets
Engine](/vault/tutorials/secrets-management/azure-secrets) tutorial
to learn how to use the Azure secrets engine to dynamically generate Azure credentials.
## API
The Azure secrets engine has a full HTTP API. Please see the [Azure secrets engine API docs][api]
for more details.
[api]: /vault/api-docs/secret/azure
[config]: /vault/api-docs/secret/azure#configure-access
[repo]: https://github.com/hashicorp/vault-plugin-secrets-azure
[user_access_admin]: https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#user-access-administrator
---
layout: docs
page_title: LDAP - Secrets Engine
description: >-
The LDAP secret engine manages LDAP entry passwords.
---
# LDAP secrets engine
@include 'x509-sha1-deprecation.mdx'
The LDAP secrets engine provides management of LDAP credentials as well as dynamic
creation of credentials. It supports integration with implementations of the LDAP
v3 protocol, including OpenLDAP, Active Directory, and IBM Resource Access Control
Facility (RACF).
The secrets engine has three primary features:
- [Static Credentials](/vault/docs/secrets/ldap#static-credentials)
- [Dynamic Credentials](/vault/docs/secrets/ldap#dynamic-credentials)
- [Service Account Check-Out](/vault/docs/secrets/ldap#service-account-check-out)
## Setup
1. Enable the LDAP secret engine:
```sh
$ vault secrets enable ldap
```
By default, the secrets engine will mount at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
2. Configure the credentials that Vault uses to communicate with LDAP
to generate passwords:
```sh
$ vault write ldap/config \
binddn=$USERNAME \
bindpass=$PASSWORD \
url=ldaps://138.91.247.105
```
Note: it's recommended a dedicated entry management account be created specifically for Vault.
3. Rotate the root password so only Vault knows the credentials:
```sh
$ vault write -f ldap/rotate-root
```
Note: it's not possible to retrieve the generated password once rotated by Vault.
It's recommended a dedicated entry management account be created specifically for Vault.
### Schemas
The LDAP secrets engine supports three different schemas:
- `openldap` (default)
- `racf`
- `ad`
#### OpenLDAP
By default, the LDAP secrets engine assumes the entry password is stored in `userPassword`.
There are many object classes that provide `userPassword`, including:
- `organization`
- `organizationalUnit`
- `organizationalRole`
- `inetOrgPerson`
- `person`
- `posixAccount`
#### Resource access control facility (RACF)
For managing IBM's Resource Access Control Facility (RACF) security system, the secret
engine must be configured to use the schema `racf`.
Generated passwords must be 8 characters or less to support RACF. The length of the
password can be configured using a [password policy](/vault/docs/concepts/password-policies):
```bash
$ vault write ldap/config \
binddn=$USERNAME \
bindpass=$PASSWORD \
url=ldaps://138.91.247.105 \
schema=racf \
password_policy=racf_password_policy
```
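The referenced `racf_password_policy` must exist before credentials are rotated. A minimal
sketch that satisfies the 8-character RACF limit (the charset shown is an assumption; adjust
it to match your RACF rules), saved as `racf_password_policy.hcl`:

```hcl
# Generate 8-character passwords from lowercase letters and digits.
length = 8
rule "charset" {
  charset = "abcdefghijklmnopqrstuvwxyz0123456789"
}
```

Register the policy before referencing it from the LDAP configuration:

```shell-session
$ vault write sys/policies/password/racf_password_policy policy=@racf_password_policy.hcl
```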
#### Active directory (AD)
For managing Active Directory instances, the secret engine must be configured to use the
schema `ad`.
```bash
$ vault write ldap/config \
binddn=$USERNAME \
bindpass=$PASSWORD \
url=ldaps://138.91.247.105 \
schema=ad
```
## Static credentials
### Setup
1. Configure a static role that maps a name in Vault to an entry in LDAP.
Password rotation settings will be managed by this role.
```sh
$ vault write ldap/static-role/hashicorp \
dn='uid=hashicorp,ou=users,dc=hashicorp,dc=com' \
username='hashicorp' \
rotation_period="24h"
```
2. Request credentials for the "hashicorp" role:
```sh
$ vault read ldap/static-cred/hashicorp
```
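The response includes the current password along with rotation metadata. The values below
are illustrative:

```shell-session
$ vault read ldap/static-cred/hashicorp
Key                    Value
---                    -----
dn                     uid=hashicorp,ou=users,dc=hashicorp,dc=com
last_vault_rotation    2024-05-03T16:39:27.174164-05:00
password               <generated-password>
rotation_period        24h
ttl                    23h59m30s
username               hashicorp
```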
### Password rotation
Passwords can be managed in two ways:
- automatic time based rotation
- manual rotation
### Auto password rotation
Passwords will automatically be rotated based on the `rotation_period` configured
in the static role (minimum of 5 seconds). When requesting credentials for a static
role, the response will include the time before the next rotation (`ttl`).
Auto-rotation is currently only supported for static roles. The `binddn` account used
by Vault should be rotated using the `rotate-root` endpoint to generate a password
only Vault will know.
### Manual rotation
Static roles can be manually rotated using the `rotate-role` endpoint. When manually
rotated the rotation period will start over.
### Deleting static roles
Passwords are not rotated upon deletion of a static role. The password should be manually
rotated prior to deleting the role or revoking access to the static role.
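A minimal sketch of that sequence for the `hashicorp` role defined earlier:

```shell-session
$ vault write -f ldap/rotate-role/hashicorp
$ vault delete ldap/static-role/hashicorp
```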
## Dynamic credentials
### Setup
Dynamic credentials can be configured by calling the `/role/:role_name` endpoint:
```bash
$ vault write ldap/role/dynamic-role \
creation_ldif=@/path/to/creation.ldif \
deletion_ldif=@/path/to/deletion.ldif \
rollback_ldif=@/path/to/rollback.ldif \
default_ttl=1h \
max_ttl=24h
```
-> Note: The `rollback_ldif` argument is optional, but recommended. The statements within `rollback_ldif` will be
executed if the creation fails for any reason. This ensures any entities are removed in the event of a failure.
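As a sketch, a minimal `creation.ldif` and `deletion.ldif` pair for an OpenLDAP directory
might look like the following. The `dc=learn,dc=example` tree is illustrative; `.Username`
and `.Password` are the template fields supplied by the plugin:

```ldif
# creation.ldif
dn: cn={{.Username}},ou=users,dc=learn,dc=example
objectClass: person
objectClass: top
cn: {{.Username}}
sn: {{.Username}}
userPassword: {{.Password}}
```

```ldif
# deletion.ldif
dn: cn={{.Username}},ou=users,dc=learn,dc=example
changetype: delete
```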
To generate credentials:
```bash
$ vault read ldap/creds/dynamic-role
Key Value
--- -----
lease_id ldap/creds/dynamic-role/HFgd6uKaDomVMvJpYbn9q4q5
lease_duration 1h
lease_renewable true
distinguished_names [cn=v_token_dynamic-role_FfH2i1c4dO_1611952635,ou=users,dc=learn,dc=example]
password xWMjkIFMerYttEbzfnBVZvhRQGmhpAA0yeTya8fdmDB3LXDzGrjNEPV2bCPE9CW6
username v_token_testrole_FfH2i1c4dO_1611952635
```
The `distinguished_names` field is an array of DNs that are created from the `creation_ldif` statements. If more than
one LDIF entry is included, the DN from each statement will be included in this field. Each entry in this field
corresponds to a single LDIF statement. No de-duplication occurs and order is maintained.
### LDIF entries
User account management is provided through LDIF entries. The LDIF entries may be a base64-encoded version of the
LDIF string. The string will be parsed and validated to ensure that it adheres to LDIF syntax. A good reference
for proper LDIF syntax can be found [here](https://ldap.com/ldif-the-ldap-data-interchange-format/).
Some important things to remember when crafting your LDIF entries:
- There should not be any trailing spaces on any line, including empty lines
- Each `modify` block needs to be preceded with an empty line
- Multiple modifications for a `dn` can be defined in a single `modify` block. Each modification needs to close
with a single dash (`-`)
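For example, a single `modify` block carrying two modifications, each closed with a dash
(the DN is hypothetical):

```ldif
dn: cn=app1,ou=users,dc=learn,dc=example
changetype: modify
replace: userPassword
userPassword: {{.Password}}
-
replace: description
description: rotated by Vault
-
```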
### Active directory (AD)
<Note>
Windows Servers hosting Active Directory include a
`lifetime period of an old password` configuration setting that lets clients
authenticate with old passwords for a specified amount of time.
For more information, refer to the
[NTLM network authentication behavior](https://learn.microsoft.com/en-us/troubleshoot/windows-server/windows-security/new-setting-modifies-ntlm-network-authentication)
guide by Microsoft.
</Note>
For Active Directory, there are a few additional details that are important to remember:
- To create a user programmatically in AD, you first `add` a user object and then `modify` that user to provide a
  password and enable the account.
- Passwords in AD are set using the `unicodePwd` field. The value must be preceded by two (2) colons (`::`).
- When setting a password programmatically in AD, the following criteria must be met:
- The password must be enclosed in double quotes (`" "`)
- The password must be in [`UTF16LE` format](https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-adts/6e803168-f140-4d23-b2d3-c3a8ab5917d2)
- The password must be `base64`-encoded
- Additional details can be found [here](https://docs.microsoft.com/en-us/troubleshoot/windows-server/identity/set-user-password-with-ldifde)
- Once a user's password has been set, it can be enabled. AD uses the `userAccountControl` field for this purpose:
- To enable the account, set `userAccountControl` to `512`
- You will likely also want to disable AD's password expiration for this dynamic user account. The
`userAccountControl` value for this is: `65536`
- `userAccountControl` flags are cumulative, so to set both of the above two flags, add up the two values
(`512 + 65536 = 66048`): set `userAccountControl` to `66048`
- See [here](https://docs.microsoft.com/en-us/troubleshoot/windows-server/identity/useraccountcontrol-manipulate-account-properties#property-flag-descriptions)
for details on `userAccountControl` flags
`sAMAccountName` is a common field when working with AD users. It is used to provide compatibility with legacy
Windows NT systems and has a limit of 20 characters. Keep this in mind when defining your `username_template`.
See [here](https://docs.microsoft.com/en-us/windows/win32/adschema/a-samaccountname) for additional details.
Since the default `username_template` follows the pattern `v_{{.DisplayName}}_{{.RoleName}}_{{random 10}}_{{unix_time}}` and generates names longer than 20 characters, we recommend customizing the `username_template` on the role configuration to generate accounts with names of fewer than 20 characters. Please refer to the [username templating document](/vault/docs/concepts/username-templating) for more information.
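A sketch of a role configuration with a shortened template; `truncate` and `random` are
standard username-templating functions, and the resulting name stays under 20 characters:

```shell-session
$ vault write ldap/role/dynamic-role \
    creation_ldif=@/path/to/creation.ldif \
    deletion_ldif=@/path/to/deletion.ldif \
    username_template='v-{{.RoleName | truncate 8}}-{{random 8}}'
```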
With regard to adding dynamic users to groups, AD doesn't let you directly modify a user's `memberOf` attribute.
The `member` attribute of a group and `memberOf` attribute of a user are
[linked attributes](https://docs.microsoft.com/en-us/windows/win32/ad/linked-attributes). Linked attributes are
forward link/back link pairs, with the forward link able to be modified. In the case of AD group membership, the
group's `member` attribute is the forward link. In order to add a newly-created dynamic user to a group, we also
need to issue a `modify` request to the desired group and update the group membership with the new user.
#### Active directory LDIF example
The various `*_ldif` parameters are templates that use the [go template](https://golang.org/pkg/text/template/)
language. A complete LDIF example for creating an Active Directory user account is provided here for reference:
```ldif
dn: CN={{.Username}},OU=HashiVault,DC=adtesting,DC=lab
changetype: add
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
userPrincipalName: {{.Username}}@adtesting.lab
sAMAccountName: {{.Username}}

dn: CN={{.Username}},OU=HashiVault,DC=adtesting,DC=lab
changetype: modify
replace: unicodePwd
unicodePwd::{{ printf "%q" .Password | utf16le | base64 }}
-
replace: userAccountControl
userAccountControl: 66048
-

dn: CN=test-group,OU=HashiVault,DC=adtesting,DC=lab
changetype: modify
add: member
member: CN={{.Username}},OU=HashiVault,DC=adtesting,DC=lab
-
```
## Service account check-out
Service account check-out provides a library of service accounts that can be checked out
by a person or by machines. Vault will automatically rotate the password each time a
service account is checked in. Service accounts can be voluntarily checked in, or Vault
will check them in when their lending period (or, "ttl", in Vault's language) ends.
The service account check-out functionality works with various [schemas](/vault/api-docs/secret/ldap#schema),
including OpenLDAP, Active Directory, and RACF. In the following usage example, the secrets
engine is configured to manage a library of service accounts in an Active Directory instance.
First we'll need to enable the LDAP secrets engine and tell it how to securely connect
to an AD server.
```shell-session
$ vault secrets enable ldap
Success! Enabled the ldap secrets engine at: ldap/
$ vault write ldap/config \
binddn=$USERNAME \
bindpass=$PASSWORD \
url=ldaps://138.91.247.105 \
userdn='dc=example,dc=com'
```
Our next step is to designate a set of service accounts for check-out.
```shell-session
$ vault write ldap/library/accounting-team \
    service_account_names=fizz@example.com,buzz@example.com \
    ttl=10h \
    max_ttl=20h \
    disable_check_in_enforcement=false
```
In this example, the service account names of `fizz@example.com` and `buzz@example.com` have
already been created on the remote AD server. They've been set aside solely for Vault to handle.
The `ttl` is how long each check-out will last before Vault checks in a service account,
rotating its password during check-in. The `max_ttl` is the maximum amount of time it can live
if it's renewed. These default to `24h`, and both use [duration format strings](/vault/docs/concepts/duration-format).
Also by default, a service account must be checked in by the same Vault entity or client token that
checked it out. However, if this behavior causes problems, set `disable_check_in_enforcement=true`.
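Library sets can be updated in place; a sketch of relaxing the enforcement on the set
created above:

```shell-session
$ vault write ldap/library/accounting-team disable_check_in_enforcement=true
```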
When a library of service accounts has been created, view their status at any time to see if they're
available or checked out.
```shell-session
$ vault read ldap/library/accounting-team/status
Key Value
--- -----
buzz@example.com    map[available:true]
fizz@example.com    map[available:true]
```
To check out any service account that's available, simply execute:
```shell-session
$ vault write -f ldap/library/accounting-team/check-out
Key Value
--- -----
lease_id ldap/library/accounting-team/check-out/EpuS8cX7uEsDzOwW9kkKOyGW
lease_duration 10h
lease_renewable true
password ?@09AZKh03hBORZPJcTDgLfntlHqxLy29tcQjPVThzuwWAx/Twx4a2ZcRQRqrZ1w
service_account_name    fizz@example.com
```
If the default `ttl` for the check-out is higher than needed, set the check-out to last
for a shorter time by using:
```shell-session
$ vault write ldap/library/accounting-team/check-out ttl=30m
Key Value
--- -----
lease_id ldap/library/accounting-team/check-out/gMonJ2jB6kYs6d3Vw37WFDCY
lease_duration 30m
lease_renewable true
password ?@09AZerLLuJfEMbRqP+3yfQYDSq6laP48TCJRBJaJu/kDKLsq9WxL9szVAvL/E1
service_account_name    buzz@example.com
```
This can be a nice way to say, "Although I _can_ have a check-out for 24 hours, if I
haven't checked it in after 30 minutes, I forgot or I'm a dead instance, so you can just
check it back in."
If no service accounts are available for check-out, Vault will return a 400 Bad Request.
```shell-session
$ vault write -f ldap/library/accounting-team/check-out
Error writing data to ldap/library/accounting-team/check-out: Error making API request.
URL: POST http://localhost:8200/v1/ldap/library/accounting-team/check-out
Code: 400. Errors:
* No service accounts available for check-out.
```
To extend a check-out, renew its lease.
```shell-session
$ vault lease renew ldap/library/accounting-team/check-out/0C2wmeaDmsToVFc0zDiX9cMq
Key Value
--- -----
lease_id ldap/library/accounting-team/check-out/0C2wmeaDmsToVFc0zDiX9cMq
lease_duration 10h
lease_renewable true
```
Renewing a check-out means its current password will live longer, since passwords are rotated
anytime a password is _checked in_ either by a caller, or by Vault because the check-out `ttl`
ends.
To check a service account back in for others to use, call:
```shell-session
$ vault write -f ldap/library/accounting-team/check-in
Key Value
--- -----
check_ins    [fizz@example.com]
```
Most of the time this will just work, but if multiple service accounts are checked out by the same
caller, Vault will need to know which one(s) to check in.
```shell-session
$ vault write ldap/library/accounting-team/check-in service_account_names=fizz@example.com
Key Value
--- -----
check_ins    [fizz@example.com]
```
To perform a check-in, Vault verifies that the caller _should_ be able to check in a given service account.
To do this, Vault looks for either the same [entity ID](/vault/tutorials/auth-methods/identity)
used to check out the service account, or the same client token.
If a caller is unable to check in a service account, or simply doesn't try,
Vault will check it back in automatically when the `ttl` expires. However, if that is too long,
service accounts can be forcibly checked in by a highly privileged user through:
```shell-session
$ vault write -f ldap/library/manage/accounting-team/check-in
Key Value
--- -----
check_ins    [fizz@example.com]
```
Or, alternatively, revoking the secret's lease has the same effect.
```shell-session
$ vault lease revoke ldap/library/accounting-team/check-out/PvBVG0m7pEg2940Cb3Jw3KpJ
All revocation operations queued successfully!
```
## Password generation
This engine previously allowed configuration of the length of the password that is generated
when rotating credentials. This mechanism was deprecated in Vault 1.5 in favor of
[password policies](/vault/docs/concepts/password-policies). This means the `length` field should
no longer be used. The following password policy can be used to mirror the same behavior
that the `length` field provides:
```hcl
length=<length>
rule "charset" {
charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
}
```
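As a sketch, you can save the policy above with a concrete length as
`ldap_password_policy.hcl`, register it, and reference it from the engine configuration
(alongside your existing connection settings):

```shell-session
$ vault write sys/policies/password/ldap_password_policy policy=@ldap_password_policy.hcl
$ vault write ldap/config password_policy=ldap_password_policy
```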
## LDAP password policy
The LDAP secret engine does not hash or encrypt passwords prior to modifying
values in LDAP. This behavior can cause plaintext passwords to be stored in LDAP.
To avoid having plaintext passwords stored, the LDAP server should be configured
with an LDAP password policy (ppolicy, not to be confused with a Vault password
policy). A ppolicy can enforce rules such as hashing plaintext passwords by default.
The following is an example of an LDAP password policy to enforce hashing on the
data information tree (DIT) `dc=hashicorp,dc=com`:
```
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: ppolicy

dn: olcOverlay={2}ppolicy,olcDatabase={1}mdb,cn=config
changetype: add
objectClass: olcPPolicyConfig
objectClass: olcOverlayConfig
olcOverlay: {2}ppolicy
olcPPolicyDefault: cn=default,ou=pwpolicies,dc=hashicorp,dc=com
olcPPolicyForwardUpdates: FALSE
olcPPolicyHashCleartext: TRUE
olcPPolicyUseLockout: TRUE
```
## Hierarchical paths
The LDAP secrets engine lets you define role and set names that contain an
arbitrary number of forward slashes. Names with forward slashes define
hierarchical path structures.
For example, you can configure two static roles with the names `org/secure` and `org/platform/dev`:
```shell-session
$ vault write ldap/static-role/org/secure \
username="user1" \
rotation_period="1h"
Success! Data written to: ldap/static-role/org/secure
$ vault write ldap/static-role/org/platform/dev \
username="user2" \
rotation_period="1h"
Success! Data written to: ldap/static-role/org/platform/dev
```
Names with hierarchical paths let you use the Vault API to query the available
roles at a specific path with arbitrary depth. Names that end with a forward
slash indicate that sub-paths reside under that path.
For example, to list all direct children under the `org/` path:
```shell-session
$ vault list ldap/static-role/org/
Keys
----
platform/
secure
```
The `platform/` key also ends in a forward slash. To list the `platform` sub-keys:
```shell-session
$ vault list ldap/static-role/org/platform
Keys
----
dev
```
You can read and rotate credentials using the same role name and the respective
APIs. For example,
```shell-session
$ vault read ldap/static-cred/org/platform/dev
Key Value
--- -----
dn n/a
last_password a3sQ6OkmXKt2dtx22kAt36YLkkxLsg4RmhMZCLYCBCbvvv67ILROaOokdCaGPEAE
last_vault_rotation 2024-05-03T16:39:27.174164-05:00
password ECf7ZoxfDxGuJEYZrzgzTffSIDI4tx5TojBR9wuEGp8bqUXbl4Kr9eAgPjmizcvg
rotation_period 5m
ttl 4m58s
username user2
```
```shell-session
$ vault write -f ldap/rotate-role/org/platform/dev
```
Since [Vault policies](/vault/docs/concepts/policies) are also path-based,
hierarchical names also let you define policies that map 1-1 to LDAP secrets
engine roles and set paths.
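For example, a policy that grants read access only to credentials under the `org/platform`
subtree (the path is illustrative):

```hcl
path "ldap/static-cred/org/platform/*" {
  capabilities = ["read"]
}
```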
The following Vault API endpoints support hierarchical path handling:
- [Static roles](/vault/api-docs/secret/ldap#static-roles)
- [Static role passwords](/vault/api-docs/secret/ldap#static-role-passwords)
- [Manually rotate static role password](/vault/api-docs/secret/ldap#manually-rotate-static-role-password)
- [Dynamic roles](/vault/api-docs/secret/ldap#dynamic-roles)
- [Dynamic role passwords](/vault/api-docs/secret/ldap#dynamic-role-passwords)
- [Library set management](/vault/api-docs/secret/ldap#library-set-management)
- [Library set status check](/vault/api-docs/secret/ldap#library-set-status-check)
- [Check-Out management](/vault/api-docs/secret/ldap#check-out-management)
- [Check-In management](/vault/api-docs/secret/ldap#check-in-management)
## Tutorial
Refer to the [LDAP Secrets Engine](/vault/tutorials/secrets-management/openldap)
tutorial to learn how to configure and use the LDAP secrets engine.
## API
The LDAP secrets engine has a full HTTP API. Please see the [LDAP secrets engine API docs](/vault/api-docs/secret/ldap)
for more details. | vault | layout docs page title LDAP Secrets Engine description The LDAP secret engine manages LDAP entry passwords LDAP secrets engine include x509 sha1 deprecation mdx The LDAP secrets engine provides management of LDAP credentials as well as dynamic creation of credentials It supports integration with implementations of the LDAP v3 protocol including OpenLDAP Active Directory and IBM Resource Access Control Facility RACF The secrets engine has three primary features Static Credentials vault docs secrets ldap static credentials Dynamic Credentials vault docs secrets ldap dynamic credentials Service Account Check Out vault docs secrets ldap service account check out Setup 1 Enable the LDAP secret engine sh vault secrets enable ldap By default the secrets engine will mount at the name of the engine To enable the secrets engine at a different path use the path argument 2 Configure the credentials that Vault uses to communicate with LDAP to generate passwords sh vault write ldap config binddn USERNAME bindpass PASSWORD url ldaps 138 91 247 105 Note it s recommended a dedicated entry management account be created specifically for Vault 3 Rotate the root password so only Vault knows the credentials sh vault write f ldap rotate root Note it s not possible to retrieve the generated password once rotated by Vault It s recommended a dedicated entry management account be created specifically for Vault Schemas The LDAP Secret Engine supports three different schemas openldap default racf ad OpenLDAP By default the LDAP Secret Engine assumes the entry password is stored in userPassword There are many object classes that provide userPassword including for example organization organizationalUnit organizationalRole inetOrgPerson person posixAccount Resource access control facility RACF For managing IBM s Resource Access Control Facility RACF security system the secret engine must be configured to use the schema racf Generated passwords must be 8 characters or less to support RACF The length of the password can be configured using a password policy vault docs concepts password policies bash vault write ldap config binddn USERNAME bindpass PASSWORD url ldaps 138 91 247 105 schema racf password policy racf password policy Active directory AD For managing Active Directory instances the secret engine must be configured to use the schema ad bash vault write ldap config binddn USERNAME bindpass PASSWORD url ldaps 138 91 247 105 schema ad Static credentials Setup 1 Configure a static role that maps a name in Vault to an entry in LDAP Password rotation settings will be managed by this role sh vault write ldap static role hashicorp dn uid hashicorp ou users dc hashicorp dc com username hashicorp rotation period 24h 2 Request credentials for the hashicorp role sh vault read ldap static cred hashicorp Password rotation Passwords can be managed in two ways automatic time based rotation manual rotation Auto password rotation Passwords will automatically be rotated based on the rotation period configured in the static role minimum of 5 seconds When requesting credentials for a static role the response will include the time before the next rotation ttl Auto rotation is currently only supported for static roles The binddn account used by Vault should be rotated using the rotate root endpoint to generate a password only Vault will know Manual rotation Static roles can be manually rotated using the rotate role endpoint When manually rotated the rotation period will start over Deleting static roles 
Passwords are not rotated upon deletion of a static role The password should be manually rotated prior to deleting the role or revoking access to the static role Dynamic credentials Setup Dynamic credentials can be configured by calling the role role name endpoint bash vault write ldap role dynamic role creation ldif path to creation ldif deletion ldif path to deletion ldif rollback ldif path to rollback ldif default ttl 1h max ttl 24h Note The rollback ldif argument is optional but recommended The statements within rollback ldif will be executed if the creation fails for any reason This ensures any entities are removed in the event of a failure To generate credentials bash vault read ldap creds dynamic role Key Value lease id ldap creds dynamic role HFgd6uKaDomVMvJpYbn9q4q5 lease duration 1h lease renewable true distinguished names cn v token dynamic role FfH2i1c4dO 1611952635 ou users dc learn dc example password xWMjkIFMerYttEbzfnBVZvhRQGmhpAA0yeTya8fdmDB3LXDzGrjNEPV2bCPE9CW6 username v token testrole FfH2i1c4dO 1611952635 The distinguished names field is an array of DNs that are created from the creation ldif statements If more than one LDIF entry is included the DN from each statement will be included in this field Each entry in this field corresponds to a single LDIF statement No de duplication occurs and order is maintained LDIF entries User account management is provided through LDIF entries The LDIF entries may be a base64 encoded version of the LDIF string The string will be parsed and validated to ensure that it adheres to LDIF syntax A good reference for proper LDIF syntax can be found here https ldap com ldif the ldap data interchange format Some important things to remember when crafting your LDIF entries There should not be any trailing spaces on any line including empty lines Each modify block needs to be preceded with an empty line Multiple modifications for a dn can be defined in a single modify block Each modification needs to close with a single dash Active directory AD Note Windows Servers hosting Active Directory include a lifetime period of an old password configuration setting that lets clients authenticate with old passwords for a specified amount of time For more information refer to the NTLM network authentication behavior https learn microsoft com en us troubleshoot windows server windows security new setting modifies ntlm network authentication guide by Microsoft Note For Active Directory there are a few additional details that are important to remember To create a user programmatically in AD you first add a user object and then modify that user to provide a password and enable the account Passwords in AD are set using the unicodePwd field This must be proceeded by two 2 colons When setting a password programmatically in AD the following criteria must be met The password must be enclosed in double quotes The password must be in UTF16LE format https docs microsoft com en us openspecs windows protocols ms adts 6e803168 f140 4d23 b2d3 c3a8ab5917d2 The password must be base64 encoded Additional details can be found here https docs microsoft com en us troubleshoot windows server identity set user password with ldifde Once a user s password has been set it can be enabled AD uses the userAccountControl field for this purpose To enable the account set userAccountControl to 512 You will likely also want to disable AD s password expiration for this dynamic user account The userAccountControl value for this is 65536 userAccountControl flags are cumulative so to set both 
of the above two flags add up the two values 512 65536 66048 set userAccountControl to 66048 See here https docs microsoft com en us troubleshoot windows server identity useraccountcontrol manipulate account properties property flag descriptions for details on userAccountControl flags sAMAccountName is a common field when working with AD users It is used to provide compatibility with legacy Windows NT systems and has a limit of 20 characters Keep this in mind when defining your username template See here https docs microsoft com en us windows win32 adschema a samaccountname for additional details Since the default username template is longer than 20 characters which follows the template of v we recommend customising the username template on the role configuration to generate accounts with names less than 20 characters Please refer to the username templating document vault docs concepts username templating for more information With regard to adding dynamic users to groups AD doesn t let you directly modify a user s memberOf attribute The member attribute of a group and memberOf attribute of a user are linked attributes https docs microsoft com en us windows win32 ad linked attributes Linked attributes are forward link back link pairs with the forward link able to be modified In the case of AD group membership the group s member attribute is the forward link In order to add a newly created dynamic user to a group we also need to issue a modify request to the desired group and update the group membership with the new user Active directory LDIF example The various ldif parameters are templates that use the go template https golang org pkg text template language A complete LDIF example for creating an Active Directory user account is provided here for reference ldif dn CN OU HashiVault DC adtesting DC lab changetype add objectClass top objectClass person objectClass organizationalPerson objectClass user userPrincipalName adtesting lab sAMAccountName dn CN OU HashiVault DC adtesting DC lab changetype modify replace unicodePwd unicodePwd replace userAccountControl userAccountControl 66048 dn CN test group OU HashiVault DC adtesting DC lab changetype modify add member member CN OU HashiVault DC adtesting DC lab Service account Check Out Service account check out provides a library of service accounts that can be checked out by a person or by machines Vault will automatically rotate the password each time a service account is checked in Service accounts can be voluntarily checked in or Vault will check them in when their lending period or ttl in Vault s language ends The service account check out functionality works with various schemas vault api docs secret ldap schema including OpenLDAP Active Directory and RACF In the following usage example the secrets engine is configured to manage a library of service accounts in an Active Directory instance First we ll need to enable the LDAP secrets engine and tell it how to securely connect to an AD server shell session vault secrets enable ldap Success Enabled the ad secrets engine at ldap vault write ldap config binddn USERNAME bindpass PASSWORD url ldaps 138 91 247 105 userdn dc example dc com Our next step is to designate a set of service accounts for check out shell session vault write ldap library accounting team service account names fizz example com buzz example com ttl 10h max ttl 20h disable check in enforcement false In this example the service account names of fizz example com and buzz example com have already been created on the remote AD 
server They ve been set aside solely for Vault to handle The ttl is how long each check out will last before Vault checks in a service account rotating its password during check in The max ttl is the maximum amount of time it can live if it s renewed These default to 24h and both use duration format strings vault docs concepts duration format Also by default a service account must be checked in by the same Vault entity or client token that checked it out However if this behavior causes problems set disable check in enforcement true When a library of service accounts has been created view their status at any time to see if they re available or checked out shell session vault read ldap library accounting team status Key Value buzz example com map available true fizz example com map available true To check out any service account that s available simply execute shell session vault write f ldap library accounting team check out Key Value lease id ldap library accounting team check out EpuS8cX7uEsDzOwW9kkKOyGW lease duration 10h lease renewable true password 09AZKh03hBORZPJcTDgLfntlHqxLy29tcQjPVThzuwWAx Twx4a2ZcRQRqrZ1w service account name fizz example com If the default ttl for the check out is higher than needed set the check out to last for a shorter time by using shell session vault write ldap library accounting team check out ttl 30m Key Value lease id ldap library accounting team check out gMonJ2jB6kYs6d3Vw37WFDCY lease duration 30m lease renewable true password 09AZerLLuJfEMbRqP 3yfQYDSq6laP48TCJRBJaJu kDKLsq9WxL9szVAvL E1 service account name buzz example com This can be a nice way to say Although I can have a check out for 24 hours if I haven t checked it in after 30 minutes I forgot or I m a dead instance so you can just check it back in If no service accounts are available for check out Vault will return a 400 Bad Request shell session vault write f ldap library accounting team check out Error writing data to ldap library accounting team check out Error making API request URL POST http localhost 8200 v1 ldap library accounting team check out Code 400 Errors No service accounts available for check out To extend a check out renew its lease shell session vault lease renew ldap library accounting team check out 0C2wmeaDmsToVFc0zDiX9cMq Key Value lease id ldap library accounting team check out 0C2wmeaDmsToVFc0zDiX9cMq lease duration 10h lease renewable true Renewing a check out means its current password will live longer since passwords are rotated anytime a password is checked in either by a caller or by Vault because the check out ttl ends To check a service account back in for others to use call shell session vault write f ldap library accounting team check in Key Value check ins fizz example com Most of the time this will just work but if multiple service accounts are checked out by the same caller Vault will need to know which one s to check in shell session vault write ldap library accounting team check in service account names fizz example com Key Value check ins fizz example com To perform a check in Vault verifies that the caller should be able to check in a given service account To do this Vault looks for either the same entity ID vault tutorials auth methods identity used to check out the service account or the same client token If a caller is unable to check in a service account or simply doesn t try Vault will check it back in automatically when the ttl expires However if that is too long service accounts can be forcibly checked in by a highly privileged user through shell 
session vault write f ldap library manage accounting team check in Key Value check ins fizz example com Or alternatively revoking the secret s lease has the same effect shell session vault lease revoke ldap library accounting team check out PvBVG0m7pEg2940Cb3Jw3KpJ All revocation operations queued successfully Password generation This engine previously allowed configuration of the length of the password that is generated when rotating credentials This mechanism was deprecated in Vault 1 5 in favor of password policies vault docs concepts password policies This means the length field should no longer be used The following password policy can be used to mirror the same behavior that the length field provides hcl length length rule charset charset abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 LDAP password policy The LDAP secret engine does not hash or encrypt passwords prior to modifying values in LDAP This behavior can cause plaintext passwords to be stored in LDAP To avoid having plaintext passwords stored the LDAP server should be configured with an LDAP password policy ppolicy not to be confused with a Vault password policy A ppolicy can enforce rules such as hashing plaintext passwords by default The following is an example of an LDAP password policy to enforce hashing on the data information tree DIT dc hashicorp dc com dn cn module 0 cn config changetype modify add olcModuleLoad olcModuleLoad ppolicy dn olcOverlay 2 ppolicy olcDatabase 1 mdb cn config changetype add objectClass olcPPolicyConfig objectClass olcOverlayConfig olcOverlay 2 ppolicy olcPPolicyDefault cn default ou pwpolicies dc hashicorp dc com olcPPolicyForwardUpdates FALSE olcPPolicyHashCleartext TRUE olcPPolicyUseLockout TRUE Hierarchical paths The LDAP secrets engine lets you define role and set names that contain an arbitrary number of forward slashes Names with forward slashes define hierarchical path structures For example you can configure two static roles with the names org secure and org platform dev shell session vault write ldap static role org secure username user1 rotation period 1h Success Data written to ldap static role org secure vault write ldap static role org platform dev username user2 rotation period 1h Success Data written to ldap static role org platform dev Names with hierarchical paths let you use the Vault API to query the available roles at a specific path with arbitrary depth Names that end with a forward slash indicate that sub paths reside under that path For example to list all direct children under the org path shell session vault list ldap static role org Keys platform secure The platform key also ends in a forward slash To list the platform sub keys shell session vault list ldap static role org platform Keys dev You can read and rotate credentials using the same role name and the respective APIs For example shell session vault read ldap static cred org platform dev Key Value dn n a last password a3sQ6OkmXKt2dtx22kAt36YLkkxLsg4RmhMZCLYCBCbvvv67ILROaOokdCaGPEAE last vault rotation 2024 05 03T16 39 27 174164 05 00 password ECf7ZoxfDxGuJEYZrzgzTffSIDI4tx5TojBR9wuEGp8bqUXbl4Kr9eAgPjmizcvg rotation period 5m ttl 4m58s username user2 shell session vault write f ldap rotate role org platform dev Since Vault policies vault docs concepts policies are also path based hierarchical names also let you define policies that map 1 1 to LDAP secrets engine roles and set paths The following Vault API endpoints support hierarchical path handling Static roles vault api docs secret ldap static 
The following Vault API endpoints support hierarchical path handling:

- [Static roles](/vault/api-docs/secret/ldap#static-roles)
- [Static role passwords](/vault/api-docs/secret/ldap#static-role-passwords)
- [Manually rotate static role password](/vault/api-docs/secret/ldap#manually-rotate-static-role-password)
- [Dynamic roles](/vault/api-docs/secret/ldap#dynamic-roles)
- [Dynamic role passwords](/vault/api-docs/secret/ldap#dynamic-role-passwords)
- [Library set management](/vault/api-docs/secret/ldap#library-set-management)
- [Library set status check](/vault/api-docs/secret/ldap#library-set-status-check)
- [Check-Out management](/vault/api-docs/secret/ldap#check-out-management)
- [Check-In management](/vault/api-docs/secret/ldap#check-in-management)

## Tutorial

Refer to the [LDAP Secrets Engine](/vault/tutorials/secrets-management/openldap)
tutorial to learn how to configure and use the LDAP secrets engine.

## API

The LDAP secrets engine has a full HTTP API. Please see the
[LDAP secrets engine API docs](/vault/api-docs/secret/ldap) for more details.
---
layout: docs
page_title: Oracle - database - secrets engines
description: |-
Oracle is one of the supported plugins for the database secrets engine. This
plugin generates database credentials dynamically based on configured roles
for the Oracle database.
---
# Oracle database secrets engine
-> The Oracle database plugin is now available for use with the database secrets engine for HCP Vault Dedicated on AWS.
The plugin configuration (including installation of the Oracle Instant Client library) is managed
by HCP. Refer to the HCP Vault Dedicated tab for more information.
This secrets engine is a part of the database secrets engine. If you have not read the
[database backend](/vault/docs/secrets/databases) page, please do so now as it explains how to set up the database backend and
gives an overview of how the engine functions.
Oracle is one of the supported plugins for the database secrets engine. It is capable of dynamically generating
credentials based on configured roles for Oracle databases. It also supports [static roles](/vault/docs/secrets/databases#static-roles).
## Capabilities
<Tabs>
<Tab heading="Vault" group="vault">
~> The Oracle database plugin is not bundled in the core Vault code tree and can be
found at its own git repository here:
[hashicorp/vault-plugin-database-oracle](https://github.com/hashicorp/vault-plugin-database-oracle)
~> This plugin is not compatible with Alpine Linux out of the box.
| Plugin Name | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |
| -------------------------------------------------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |
| Customizable (see: [Custom Plugins](/vault/docs/secrets/databases/custom)) | Yes | Yes | Yes | Yes (1.7+) |
</Tab>
<Tab heading="HCP Vault Dedicated" group="hcp">
~> The Oracle Database Plugin is managed by the HCP platform. No extra installation steps are required for HCP Vault Dedicated.
| Plugin Name | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |
| -------------------------------------------------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |
| `vault-plugin-database-oracle` | Yes | Yes | Yes | Yes |
</Tab>
</Tabs>
## Setup
<Tabs>
<Tab heading="Vault" group="vault">
The Oracle database plugin is not bundled in the core Vault code tree and can be
found at its own git repository here:
[hashicorp/vault-plugin-database-oracle](https://github.com/hashicorp/vault-plugin-database-oracle)
For linux/amd64, pre-built binaries can be found at [the releases page](https://releases.hashicorp.com/vault-plugin-database-oracle)
Before running the plugin you will need to have the Oracle Instant Client
library installed. You can download the libraries from Oracle. They must be
placed in the default library search path or defined in the `ld.so.conf`
configuration files.
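A minimal sketch of installing the Instant Client libraries on Linux and
registering them with the dynamic linker; the archive name, version, and paths
are assumptions, not requirements:

```shell-session
$ sudo unzip instantclient-basiclite-linux.x64-19.19.0.0.0dbru.zip -d /opt/oracle
$ echo "/opt/oracle/instantclient_19_19" | sudo tee /etc/ld.so.conf.d/oracle-instantclient.conf
$ sudo ldconfig
```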
The following privileges are needed by the plugin for minimum functionality. Additional privileges may be needed
depending on the SQL configured on the database roles.
```sql
GRANT CREATE USER to vault WITH ADMIN OPTION;
GRANT ALTER USER to vault WITH ADMIN OPTION;
GRANT DROP USER to vault WITH ADMIN OPTION;
GRANT CONNECT to vault WITH ADMIN OPTION;
GRANT CREATE SESSION to vault WITH ADMIN OPTION;
GRANT SELECT on gv_$session to vault;
GRANT SELECT on v_$sql to vault;
GRANT ALTER SYSTEM to vault WITH ADMIN OPTION;
```
~> Vault needs `ALTER SYSTEM` to terminate user sessions when revoking users. If you prefer
not to grant `ALTER SYSTEM` directly, you can substitute a stored procedure and grant
execute rights on it to the Vault administrator user.
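A minimal, hypothetical sketch of such a wrapper procedure; the procedure name
and the `vault` grantee are assumptions, and the schema that owns the procedure
must hold `ALTER SYSTEM` directly (definer rights):

```sql
-- Hypothetical wrapper: terminates a session without granting ALTER SYSTEM
-- directly to the Vault user.
CREATE OR REPLACE PROCEDURE vault_kill_session (p_sid IN NUMBER, p_serial IN NUMBER)
AS
BEGIN
  EXECUTE IMMEDIATE 'ALTER SYSTEM KILL SESSION ''' || p_sid || ',' || p_serial || ''' IMMEDIATE';
END;
/
-- Grant the Vault user execute on the wrapper instead of ALTER SYSTEM.
GRANT EXECUTE ON vault_kill_session TO vault;
```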
If you are running Vault with [mlock enabled](/vault/docs/configuration#disable_mlock),
you will need to enable ipc_lock capabilities for the plugin binary.
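A minimal sketch of granting that capability; the plugin directory path is an
assumption:

```shell-session
$ sudo setcap cap_ipc_lock=+ep /etc/vault/plugins/vault-plugin-database-oracle
```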
1. Enable the database secrets engine if it is not already enabled:
```shell-session
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
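For example, a hypothetical mount at `oracle/` instead of the default `database/`:

```shell-session
$ vault secrets enable -path=oracle database
Success! Enabled the database secrets engine at: oracle/
```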
1. Download and register the plugin:
```shell-session
$ vault write sys/plugins/catalog/database/oracle-database-plugin \
sha256="..." \
command=vault-plugin-database-oracle
```
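The `sha256` value must match the plugin binary that Vault will execute. A
minimal sketch of computing it, assuming the binary was downloaded to a
hypothetical plugin directory:

```shell-session
$ SHA256=$(sha256sum /etc/vault/plugins/vault-plugin-database-oracle | cut -d' ' -f1)
$ vault write sys/plugins/catalog/database/oracle-database-plugin \
    sha256="$SHA256" \
    command=vault-plugin-database-oracle
```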
1. Configure Vault with the proper plugin and connection information:
```shell-session
$ vault write database/config/my-oracle-database \
plugin_name=oracle-database-plugin \
connection_url="/@localhost:1521/OraDoc.localhost" \
allowed_roles="my-role" \
username="VAULT_SUPER_USER" \
password="myreallysecurepassword"
```
If Oracle uses SSL, see the [connecting using SSL](/vault/docs/secrets/databases/oracle#connect-using-ssl) example.
If the version of Oracle you are using has a container database, you will need to connect to one of the
pluggable databases rather than the container database in the `connection_url` field.
1. It is highly recommended that you immediately rotate the "root" user's password, see
[Rotate Root Credentials](/vault/api-docs/secret/databases#rotate-root-credentials) for more details.
This will ensure that only Vault is able to access the "root" user that Vault uses to
manipulate dynamic & static credentials.
!> **Use caution:** the root user's password will not be accessible once rotated so it is highly
recommended that you create a user for Vault to use rather than using the actual root user.
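For example, rotating the root credentials with the built-in rotate-root
endpoint, using the connection name configured above:

```shell-session
$ vault write -force database/rotate-root/my-oracle-database
```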
1. Configure a role that maps a name in Vault to an SQL statement to execute to
create the database credential:
```shell-session
$ vault write database/roles/my-role \
db_name=my-oracle-database \
creation_statements='CREATE USER {{username}} IDENTIFIED BY "{{password}}"; GRANT CONNECT TO {{username}}; GRANT CREATE SESSION TO {{username}};' \
default_ttl="1h" \
max_ttl="24h"
```
Note: The `creation_statements` may be specified in a file and interpreted by the Vault CLI using the `@` symbol:
```shell-session
$ vault write database/roles/my-role \
creation_statements=@creation_statements.sql \
...
```
See the [Commands](/vault/docs/commands#files) docs for more details.
### Connect using SSL
If the Oracle server Vault is trying to connect to uses an SSL listener, the database
plugin will require extra configuration using the `connection_url` parameter:
```shell-session
$ vault write database/config/oracle \
plugin_name=vault-plugin-database-oracle \
connection_url='{{username}}/{{password}}@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=<host>)(PORT=<port>))(CONNECT_DATA=(SERVICE_NAME=<service_name>))(SECURITY=(SSL_SERVER_CERT_DN="<cert_dn>")(MY_WALLET_DIRECTORY=<path_to_wallet>)))' \
allowed_roles="my-role" \
username="admin" \
password="password"
```
For example, the SSL server certificate distinguished name and path to the Oracle Wallet
to use for connection and verification could be configured using:
```shell-session
$ vault write database/config/oracle \
plugin_name=vault-plugin-database-oracle \
connection_url='{{username}}/{{password}}@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=hashicorp.com)(PORT=1523))(CONNECT_DATA=(SERVICE_NAME=ORCL))(SECURITY=(SSL_SERVER_CERT_DN="CN=hashicorp.com,OU=TestCA,O=HashiCorp=com")(MY_WALLET_DIRECTORY=/etc/oracle/wallets)))' \
allowed_roles="my-role" \
username="admin" \
password="password"
```
#### Wallet permissions
~> **Note**: The wallets used when connecting via SSL should be available on every Vault
server when using high availability clusters.
The wallet used by Vault should be in a well-known location with the proper filesystem permissions. For example, if Vault is running as the `vault` user,
the wallet directory may be set up as follows:
```shell-session
$ mkdir -p /etc/vault/wallets
$ cp cwallet.sso /etc/vault/wallets/cwallet.sso
$ chown -R vault:vault /etc/vault
$ chmod 600 /etc/vault/wallets/cwallet.sso
```
### Using TNS names
~> **Note**: The `tnsnames.ora` file and environment variable used when connecting via SSL should
be available on every Vault server when using high availability clusters.
Vault can optionally use TNS names in the connection string when connecting to Oracle databases using a `tnsnames.ora` file. An example
of a `tnsnames.ora` file may look like the following:
```text
AWSEAST=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCPS)(HOST = hashicorp.us-east-1.rds.amazonaws.com)(PORT = 1523))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = ORCL)
)
(SECURITY =
(SSL_SERVER_CERT_DN = "CN=hashicorp.rds.amazonaws.com/OU=RDS/O=Amazon.com/L=Seattle/ST=Washington/C=US")
(MY_WALLET_DIRECTORY = /etc/oracle/wallet/east)
)
)
AWSWEST=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCPS)(HOST = hashicorp.us-west-1.rds.amazonaws.com)(PORT = 1523))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = ORCL)
)
(SECURITY =
(SSL_SERVER_CERT_DN = "CN=hashicorp.rds.amazonaws.com/OU=RDS/O=Amazon.com/L=Seattle/ST=Washington/C=US")
(MY_WALLET_DIRECTORY = /etc/oracle/wallet/west)
)
)
```
To configure Vault to use TNS names, set the following environment variable on the Vault server:
```shell-session
$ export TNS_ADMIN=/path/to/tnsnames/directory
```
~> **Note**: If Vault returns a "could not open file" error, double check that
the `TNS_ADMIN` environment variable is available to the Vault server.
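How you make `TNS_ADMIN` visible to the Vault process depends on how Vault is
launched. A minimal sketch for a systemd-managed Vault; the unit name and
directory are assumptions:

```shell-session
$ sudo systemctl edit vault
# In the editor, add:
#   [Service]
#   Environment="TNS_ADMIN=/etc/oracle/tns"
$ sudo systemctl daemon-reload
$ sudo systemctl restart vault
```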
Use the alias in the `connection_url` parameter on the database configuration:
```shell-session
$ vault write database/config/oracle-east \
    plugin_name=vault-plugin-database-oracle \
    connection_url="{{username}}/{{password}}@AWSEAST" \
    allowed_roles="my-role" \
    username="VAULT_SUPER_USER" \
    password="myreallysecurepassword"

$ vault write database/config/oracle-west \
    plugin_name=vault-plugin-database-oracle \
    connection_url="{{username}}/{{password}}@AWSWEST" \
    allowed_roles="my-role" \
    username="VAULT_SUPER_USER" \
    password="myreallysecurepassword"
```
</Tab>
<Tab heading="HCP Vault Dedicated" group="hcp">
1. Enable the database secrets engine if it is not already enabled:
```shell-session
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure Vault with the proper plugin and connection information. The `plugin_name` must be set to
`vault-plugin-database-oracle`.
~> **Note:** Replace `your-oracle-host` in the `connection_url` parameter with the hostname of your Oracle server.
```shell-session
$ vault write database/config/my-oracle-database \
plugin_name=vault-plugin-database-oracle \
connection_url="/@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=your-oracle-host)(PORT=1521))(CONNECT_DATA=(SID=ORCL)))" \
allowed_roles="my-role" \
username="VAULT_SUPER_USER" \
password="myreallysecurepassword"
```
HCP Vault Dedicated currently supports SSL connections for Oracle on Amazon Web Services (AWS) Relational Database Service (RDS).
If Oracle is deployed on AWS RDS, and uses SSL, see the [connecting with HCP Vault Dedicated using SSL](#connect-with-hcp-vault-using-ssl) example.
If the version of Oracle you are using has a container database, you will need to connect to one of the
pluggable databases rather than the container database in the `connection_url` field.
1. It is highly recommended that you immediately rotate the "root" user's password, see
[Rotate Root Credentials](/vault/api-docs/secret/databases#rotate-root-credentials) for more details.
This will ensure that only Vault is able to access the "root" user that Vault uses to
manipulate dynamic & static credentials.
!> **Use caution:** the "root" user's password will not be accessible once rotated so it is highly
recommended that you create a user for Vault to use rather than the actual `root` user.
1. Configure a role that maps a name in Vault to an SQL statement to execute to
create the database credential:
```shell-session
$ vault write database/roles/my-role \
db_name=my-oracle-database \
creation_statements='CREATE USER {{username}} IDENTIFIED BY "{{password}}"; GRANT CONNECT TO {{username}}; GRANT CREATE SESSION TO {{username}};' \
default_ttl="1h" \
max_ttl="24h"
```
Note: The `creation_statements` may be specified in a file and interpreted by the Vault CLI using the `@` symbol:
```shell-session
$ vault write database/roles/my-role \
creation_statements=@creation_statements.sql \
...
```
See the [Commands](/vault/docs/commands#files) docs for more details.
### Connect with HCP Vault Dedicated using SSL
Before using SSL with Oracle RDS, you must configure an option group with SSL and set the following:
- `SQLNET.SSL_VERSION` to `1.2`
- `SQLNET.CIPHER_SUITE` to one of `TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384`, `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`, `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`, `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256`, `TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA`, or `TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA`
If the AWS RDS Oracle instance Vault is trying to connect to uses an SSL listener, the database
plugin will require extra configuration using the `connection_url` parameter:
```shell-session
$ vault write database/config/oracle \
plugin_name=vault-plugin-database-oracle \
connection_url='{{username}}/{{password}}@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=<host>)(PORT=<port>))(CONNECT_DATA=(SERVICE_NAME=<service_name>))(SECURITY=(SSL_SERVER_CERT_DN="<cert_dn>")(MY_WALLET_DIRECTORY=<path_to_wallet>)))' \
allowed_roles="my-role" \
username="VAULT_SUPER_USER" \
password="myreallysecurepassword"
```
For example, the SSL server certificate distinguished name for AWS RDS and path to the Oracle Wallet
to use for connection and verification could be configured using:
- Wallet location and permissions are managed by the HCP platform. The wallet is available at `/etc/vault.d/plugin/oracle/ssl_wallet`.
- The distinguished name for the current AWS RDS CA is in the format `SECURITY=(SSL_SERVER_CERT_DN="C=US,ST=Washington,L=Seattle,O=Amazon.com,OU=RDS,CN=your-rds-endpoint-url")`.
- A listener on port `2484` is enabled by adding `SSL` to an RDS option group and applying the option group with SSL to your Oracle RDS instance.
- Replace `your-rds-endpoint-url` with the endpoint for your RDS instance in the `HOST` and `DN` parameters.
-> **Note:** For more information on using SSL/TLS with AWS RDS, review the [Using SSL/TLS to encrypt a connection to a DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html) AWS documentation.
```shell-session
$ vault write database/config/my-oracle-database \
plugin_name=vault-plugin-database-oracle \
connection_url="/@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=your-rds-endpoint-url)(PORT=2484))(CONNECT_DATA=(SERVICE_NAME=ORCL))(SECURITY=(SSL_SERVER_CERT_DN="C=US,ST=Washington,L=Seattle,O=Amazon.com,OU=RDS,CN=your-rds-endpoint-url")(MY_WALLET_DIRECTORY=/etc/vault.d/plugin/oracle/ssl_wallet)))" \
allowed_roles="my-role" \
username="admin" \
password="password"
```
~> **Using TNS names:** `tnsnames.ora` configuration is not currently available with HCP Vault Dedicated.
</Tab>
</Tabs>
## Usage
### Dynamic credentials
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can generate credentials.
1. Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```text
$ vault read database/creds/my-role
Key Value
--- -----
lease_id database/creds/my-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
lease_duration 1h
lease_renewable true
password yRUSyd-vPYDg5NkU9kDg
username V_VAULTUSE_MY_ROLE_SJJUK3Q8W3BKAYAN8S62_1602543009
```
## API
The full list of configurable options can be seen in the [Oracle database plugin
API](/vault/api-docs/secret/databases/oracle) page.
For more information on the database secrets engine's HTTP API please see the
[Database secrets engine API](/vault/api-docs/secret/databases) page. | vault | layout docs page title Oracle database secrets engines description Oracle is one of the supported plugins for the database secrets engine This plugin generates database credentials dynamically based on configured roles for the Oracle database Oracle database secrets engine The Oracle database plugin is now available for use with the database secrets engine for HCP Vault Dedicated on AWS The plugin configuration including installation of the Oracle Instant Client library is managed by HCP Refer to the HCP Vault Dedicated tab for more information This secrets engine is a part of the database secrets engine If you have not read the database backend vault docs secrets databases page please do so now as it explains how to set up the database backend and gives an overview of how the engine functions Oracle is one of the supported plugins for the database secrets engine It is capable of dynamically generating credentials based on configured roles for Oracle databases It also supports static roles vault docs secrets databases static roles Capabilities Tabs Tab heading Vault group vault The Oracle database plugin is not bundled in the core Vault code tree and can be found at its own git repository here hashicorp vault plugin database oracle https github com hashicorp vault plugin database oracle This plugin is not compatible with Alpine Linux out of the box Plugin Name Root Credential Rotation Dynamic Roles Static Roles Username Customization Customizable see Custom Plugins vault docs secrets databases custom Yes Yes Yes Yes 1 7 Tab Tab heading HCP Vault Dedicated group hcp The Oracle Database Plugin is managed by the HCP platform No extra installation steps are required for HCP Vault Dedicated Plugin Name Root Credential Rotation Dynamic Roles Static Roles Username Customization vault plugin database oracle Yes Yes Yes Yes Tab Tabs Setup Tabs Tab heading Vault group vault The Oracle database plugin is not bundled in the core Vault code tree and can be found at its own git repository here hashicorp vault plugin database oracle https github com hashicorp vault plugin database oracle For linux amd64 pre built binaries can be found at the releases page https releases hashicorp com vault plugin database oracle Before running the plugin you will need to have the Oracle Instant Client library installed These can be downloaded from Oracle The libraries will need to be placed in the default library search path or defined in the ld so conf configuration files The following privileges are needed by the plugin for minimum functionality Additional privileges may be needed depending on the SQL configured on the database roles sql GRANT CREATE USER to vault WITH ADMIN OPTION GRANT ALTER USER to vault WITH ADMIN OPTION GRANT DROP USER to vault WITH ADMIN OPTION GRANT CONNECT to vault WITH ADMIN OPTION GRANT CREATE SESSION to vault WITH ADMIN OPTION GRANT SELECT on gv session to vault GRANT SELECT on v sql to vault GRANT ALTER SYSTEM to vault WITH ADMIN OPTION Vault needs ALTER SYSTEM to terminate user sessions when revoking users This may be substituted with a stored procedure and granted to the Vault administrator user If you are running Vault with mlock enabled vault docs configuration disable mlock you will need to enable ipc lock capabilities for the plugin binary 1 Enable the database secrets engine if it is not already enabled shell session vault secrets enable database Success Enabled the database secrets engine at database By 
default the secrets engine will enable at the name of the engine To enable the secrets engine at a different path use the path argument 1 Download and register the plugin shell session vault write sys plugins catalog database oracle database plugin sha256 command vault plugin database oracle 1 Configure Vault with the proper plugin and connection information shell session vault write database config my oracle database plugin name oracle database plugin connection url localhost 1521 OraDoc localhost allowed roles my role username VAULT SUPER USER password myreallysecurepassword If Oracle uses SSL see the connecting using SSL vault docs secrets databases oracle connect using ssl example If the version of Oracle you are using has a container database you will need to connect to one of the pluggable databases rather than the container database in the connection url field 1 It is highly recommended that you immediately rotate the root user s password see Rotate Root Credentials vault api docs secret databases rotate root credentials for more details This will ensure that only Vault is able to access the root user that Vault uses to manipulate dynamic static credentials Use caution the root user s password will not be accessible once rotated so it is highly recommended that you create a user for Vault to use rather than using the actual root user 1 Configure a role that maps a name in Vault to an SQL statement to execute to create the database credential shell session vault write database roles my role db name my oracle database creation statements CREATE USER IDENTIFIED BY GRANT CONNECT TO GRANT CREATE SESSION TO default ttl 1h max ttl 24h Note The creation statements may be specified in a file and interpreted by the Vault CLI using the symbol shell session vault write database roles my role creation statements creation statements sql See the Commands vault docs commands files docs for more details Connect using SSL If the Oracle server Vault is trying to connect to uses an SSL listener the database plugin will require extra configuration using the connection url parameter shell session vault write database config oracle plugin name vault plugin database oracle connection url DESCRIPTION ADDRESS PROTOCOL tcps HOST host PORT port CONNECT DATA SERVICE NAME service name SECURITY SSL SERVER CERT DN cert dn MY WALLET DIRECTORY path to wallet allowed roles my role username admin password password For example the SSL server certificate distinguished name and path to the Oracle Wallet to use for connection and verification could be configured using shell session vault write database config oracle plugin name vault plugin database oracle connection url DESCRIPTION ADDRESS PROTOCOL tcps HOST hashicorp com PORT 1523 CONNECT DATA SERVICE NAME ORCL SECURITY SSL SERVER CERT DN CN hashicorp com OU TestCA O HashiCorp com MY WALLET DIRECTORY etc oracle wallets allowed roles my role username admin password password Wallet permissions Note The wallets used when connecting via SSL should be available on every Vault server when using high availability clusters The wallet used by Vault should be in a well known location with the proper filesystem permissions For example if Vault is running as the vault user the wallet directory may be setup as follows shell session mkdir p etc vault wallets cp cwallet sso etc vault wallets cwallet sso chown R vault vault etc vault chmod 600 etc vault wallets cwallet sso Using TNS names Note The tnsnames ora file and environment variable used when connecting via SSL should be 
available on every Vault server when using high availability clusters Vault can optionally use TNS names in the connection string when connecting to Oracle databases using a tnsnames ora file An example of a tnsnames ora file may look like the following shell session AWSEAST DESCRIPTION ADDRESS PROTOCOL TCPS HOST hashicorp us east 1 rds amazonaws com PORT 1523 CONNECT DATA SERVER DEDICATED SID ORCL SECURITY SSL SERVER CERT DN CN hashicorp rds amazonaws com OU RDS O Amazon com L Seattle ST Washington C US MY WALLET DIRECTORY etc oracle wallet east AWSWEST DESCRIPTION ADDRESS PROTOCOL TCPS HOST hashicorp us west 1 rds amazonaws com PORT 1523 CONNECT DATA SERVER DEDICATED SID ORCL SECURITY SSL SERVER CERT DN CN hashicorp rds amazonaws com OU RDS O Amazon com L Seattle ST Washington C US MY WALLET DIRECTORY etc oracle wallet west To configure Vault to use TNS names set the following environment variable on the Vault server shell session TNS ADMIN path to tnsnames directory Note If Vault returns a could not open file error double check that the TNS ADMIN environment variable is available to the Vault server Use the alias in the connection url parameter on the database configuration vault write database config oracle east plugin name vault plugin database oracle connection url AWSEAST allowed roles my role username VAULT SUPER USER password myreallysecurepassword vault write database config oracle west plugin name vault plugin database oracle connection url AWSWEST allowed roles my role username VAULT SUPER USER password myreallysecurepassword Tab Tab heading HCP Vault Dedicated group hcp 1 Enable the database secrets engine if it is not already enabled shell session vault secrets enable database Success Enabled the database secrets engine at database By default the secrets engine will enable at the name of the engine To enable the secrets engine at a different path use the path argument 1 Configure Vault with the proper plugin and connection information The plugin name must be set to vault plugin database oracle Note Replace your oracle host in the connection url parameter with the hostname of your Oracle server shell session vault write database config my oracle database plugin name vault plugin database oracle connection url DESCRIPTION ADDRESS PROTOCOL TCP HOST your oracle host PORT 1521 CONNECT DATA SID ORCL allowed roles my role username VAULT SUPER USER password myreallysecurepassword HCP Vault Dedicated currently supports SSL connections for Oracle on Amazon Web Services AWS Relational Database Service RDS If Oracle is deployed on AWS RDS and uses SSL see the connecting with HCP Vault Dedicated using SSL connect with hcp vault using ssl example If the version of Oracle you are using has a container database you will need to connect to one of the pluggable databases rather than the container database in the connection url field 1 It is highly recommended that you immediately rotate the root user s password see Rotate Root Credentials vault api docs secret databases rotate root credentials for more details This will ensure that only Vault is able to access the root user that Vault uses to manipulate dynamic static credentials Use caution the root user s password will not be accessible once rotated so it is highly recommended that you create a user for Vault to use rather than the actual root user 1 Configure a role that maps a name in Vault to a SQL statement to execute and create the database credential shell session vault write database roles my role db name my oracle database creation 
statements CREATE USER IDENTIFIED BY GRANT CONNECT TO GRANT CREATE SESSION TO default ttl 1h max ttl 24h Note The creation statements may be specified in a file and interpreted by the Vault CLI using the symbol shell session vault write database roles my role creation statements creation statements sql See the Commands vault docs commands files docs for more details Connect with HCP Vault Dedicated using SSL Before using SSL with Oracle RDS you must configure a option group with SSL and set the following SQLNET SSL VERSION to 1 2 SQLNET CIPHER SUITE to one of TLS ECDHE RSA WITH AES 256 CBC SHA384 TLS ECDHE RSA WITH AES 256 GCM SHA384 TLS ECDHE RSA WITH AES 128 GCM SHA256 TLS ECDHE RSA WITH AES 128 CBC SHA256 TLS ECDHE RSA WITH AES 256 CBC SHA or TLS ECDHE RSA WITH AES 128 CBC SHA If the AWS RDS Oracle instance Vault is trying to connect to uses an SSL listener the database plugin will require extra configuration using the connection url parameter shell session vault write database config oracle plugin name vault plugin database oracle connection url DESCRIPTION ADDRESS PROTOCOL tcps HOST host PORT port CONNECT DATA SERVICE NAME service name SECURITY SSL SERVER CERT DN cert dn MY WALLET DIRECTORY path to wallet allowed roles my role username VAULT SUPER USER password myreallysecurepassword For example the SSL server certificate distinguished name for AWS RDS and path to the Oracle Wallet to use for connection and verification could be configured using Wallet location and permissions are managed by the HCP platform The wallet is available at etc vault d plugin oracle ssl wallet The distinguished name for the current AWS RDS CA is in the format SECURITY SSL SERVER CERT DN C US ST Washington L Seattle O Amazon com OU RDS CN your rds endpoint url A listener on port 2484 is enabled by adding SSL to a RDS option group and applying the option group with SSL to your Oracle RDS instance Replace your rds endpoint url with the endpoint for your RDS instance in the HOST and DN parameters Note For more information on using SSL TLS with AWS RDS review the Using SSL TLS to encrypt a connetion to a DB instance https docs aws amazon com AmazonRDS latest UserGuide UsingWithRDS SSL html AWS documentation shell session vault write database config my oracle database plugin name vault plugin database oracle connection url DESCRIPTION ADDRESS PROTOCOL tcps HOST your rds endpoint url PORT 2484 CONNECT DATA SERVICE NAME ORCL SECURITY SSL SERVER CERT DN C US ST Washington L Seattle O Amazon com OU RDS CN your rds endpoint url MY WALLET DIRECTORY etc vault d plugin oracle ssl wallet allowed roles my role username admin password password Using TNS names tnsnames ora configuration is not currently available with HCP Vault Dedicated Tab Tabs Usage Dynamic credentials After the secrets engine is configured and a user machine has a Vault token with the proper permission it can generate credentials 1 Generate a new credential by reading from the creds endpoint with the name of the role text vault read database creds my role Key Value lease id database creds my role 2f6a614c 4aa2 7b19 24b9 ad944a8d4de6 lease duration 1h lease renewable true password yRUSyd vPYDg5NkU9kDg username V VAULTUSE MY ROLE SJJUK3Q8W3BKAYAN8S62 1602543009 API The full list of configurable options can be seen in the Oracle database plugin API vault api docs secret databases oracle page For more information on the database secrets engine s HTTP API please see the Database secrets engine API vault api docs secret databases page |
---
layout: docs
page_title: MongoDB Atlas - Database - Secrets Engines
description: |-
MongoDB Atlas is one of the supported plugins for the database secrets engine. This
plugin generates database credentials dynamically based on configured roles
for MongoDB Atlas databases.
---
# MongoDB Atlas database secrets engine
MongoDB Atlas is one of the supported plugins for the database secrets engine. This
plugin generates database credentials dynamically based on configured roles for
MongoDB Atlas databases. It cannot support rotating the root user's credentials because
it uses a public and private key pair to authenticate.
See the [database secrets engine](/vault/docs/secrets/databases) docs for
more information about setting up the database secrets engine.
<Note>
The information below relates to the MongoDB Atlas <b>database plugin</b> for the Vault database secrets engine.
Refer to the <a href="/vault/docs/secrets/mongodbatlas">MongoDB Atlas secrets engine</a> for
information about using the MongoDB Atlas secrets engine for Vault.
</Note>
## Capabilities
| Plugin Name | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization | Credential Types |
| ------------------------------ | ------------------------ | ------------- | ------------ | ---------------------- | ---------------------------- |
| `mongodbatlas-database-plugin` | No | Yes | Yes | Yes (1.8+) | password, client_certificate |
## Setup
1. Enable the database secrets engine if it is not already enabled:
```shell-session
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure Vault with the proper plugin and connection information:
```shell-session
$ vault write database/config/my-mongodbatlas-database \
plugin_name=mongodbatlas-database-plugin \
allowed_roles="*" \
public_key="jmskfortvf" \
private_key="ea6acbc7-8a30-4a3f-812e-6f869c08bcd1" \
project_id="4f96cad208574fd14aa8dda3a"
```
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permissions, it can generate credentials.
#### Password credentials
1. Configure a role that maps a name in Vault to a MongoDB Atlas command that executes and
creates the database user credential:
```shell-session
$ vault write database/roles/my-password-role \
db_name=my-mongodbatlas-database \
creation_statements='{"database_name": "admin","roles": [{"databaseName":"admin","roleName":"atlasAdmin"}]}' \
default_ttl="1h" \
max_ttl="24h"
Success! Data written to: database/roles/my-password-role
```
1. Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```shell-session
$ vault read database/creds/my-password-role
Key Value
--- -----
lease_id           database/creds/my-password-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
lease_duration 1h
lease_renewable true
password FBYwnnh-fwc0quxtKf11
username v-my-password-role-DKbQEg6uRn
```
Each invocation of the command generates a new credential.
MongoDB Atlas database credentials eventually become consistent when the
[MongoDB Atlas Admin API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/)
coordinates with hosted clusters in your Atlas project. You cannot use the
credentials successfully until the consistency process completes.
If you plan to use MongoDB Atlas credentials in a pipeline, you may need to add
a time delay or secondary process to account for the time required to establish consistency.
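A minimal sketch of such a delay loop, assuming `jq` and `mongosh` are
installed and using a hypothetical cluster URL:

```shell-session
$ CREDS=$(vault read -format=json database/creds/my-password-role)
$ USERNAME=$(echo "$CREDS" | jq -r '.data.username')
$ PASSWORD=$(echo "$CREDS" | jq -r '.data.password')
$ # Retry until Atlas finishes propagating the new user to the cluster.
$ until mongosh "mongodb+srv://cluster0.example.mongodb.net/admin" \
    --username "$USERNAME" --password "$PASSWORD" \
    --eval 'db.runCommand({ping: 1})' --quiet; do sleep 5; done
```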
#### Client certificate credentials
1. Configure a role that maps a name in Vault to a MongoDB Atlas command that executes and
creates the X509 type database user credential:
```shell-session
$ vault write database/roles/my-dynamic-certificate-role \
db_name=my-mongodbatlas-database \
creation_statements='{"database_name": "$external", "x509Type": "CUSTOMER", "roles": [{"databaseName":"<db_name>","roleName":"readWrite"}]}' \
default_ttl="1h" \
max_ttl="24h" \
credential_type="client_certificate" \
credential_config=ca_cert="$(cat path/to/ca_cert.pem)" \
credential_config=ca_private_key="$(cat path/to/private_key.pem)" \
credential_config=key_type="rsa" \
credential_config=key_bits=2048 \
credential_config=signature_bits=256 \
credential_config=common_name_template="{{.DisplayName}}_{{.RoleName}}_{{unix_time}}"
Success! Data written to: database/roles/my-dynamic-certificate-role
```
1. Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```shell-session
$ vault read database/creds/my-dynamic-certificate-role
Key Value
--- -----
request_id b6556b2d-c379-5a92-465d-6597c506c821
lease_id database/creds/my-dynamic-certificate-role/AZ5tao6NjLJctx7fm1bujKEL
lease_duration 1h
lease_renewable true
client_certificate -----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
private_key -----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
private_key_type rsa
username CN=token_my-dynamic-certificate-role_1677262121
```
## Client certificate authentication
MongoDB Atlas supports [X.509 client certificate based authentication](https://www.mongodb.com/docs/manual/tutorial/configure-x509-client-authentication/)
for enhanced authentication security as an alternative to username and password authentication.
The MongoDB Atlas database plugin can be used to manage client certificate credentials for
MongoDB Atlas users by using `client_certificate` [credential_type](/vault/api-docs/secret/databases#credential_type).
See the [usage](/vault/docs/secrets/databases/mongodbatlas#usage) section for examples using dynamic roles.
## API
The full list of configurable options can be seen in the [MongoDB Atlas Database
Plugin HTTP API](/vault/api-docs/secret/databases/mongodbatlas) page.
For more information on the database secrets engine's HTTP API please see the
[Database Secrets Engine API](/vault/api-docs/secret/databases) page. | vault | layout docs page title MongoDB Atlas Database Secrets Engines description MongoDB Atlas is one of the supported plugins for the database secrets engine This plugin generates database credentials dynamically based on configured roles for MongoDB Atlas databases MongoDB Atlas database secrets engine MongoDB Atlas is one of the supported plugins for the database secrets engine This plugin generates database credentials dynamically based on configured roles for MongoDB Atlas databases It cannot support rotating the root user s credentials because it uses a public and private key pair to authenticate See the database secrets engine vault docs secrets databases docs for more information about setting up the database secrets engine Note The information below relates to the MongoDB Altas b database plugin b for the Vault database secrets engine Refer to the a href vault docs secrets mongodbatlas MongoDB Atlas secrets engine a for information about using the MongoDB Atlas secrets engine for the Vault Note Capabilities Plugin Name Root Credential Rotation Dynamic Roles Static Roles Username Customization Credential Types mongodbatlas database plugin No Yes Yes Yes 1 8 password client certificate Setup 1 Enable the database secrets engine if it is not already enabled shell session vault secrets enable database Success Enabled the database secrets engine at database By default the secrets engine will enable at the name of the engine To enable the secrets engine at a different path use the path argument 1 Configure Vault with the proper plugin and connection information shell session vault write database config my mongodbatlas database plugin name mongodbatlas database plugin allowed roles public key jmskfortvf private key ea6acbc7 8a30 4a3f 812e 6f869c08bcd1 project id 4f96cad208574fd14aa8dda3a Usage After the secrets engine is configured and a user machine has a Vault token with the proper permissions it can generate credentials Password credentials 1 Configure a role that maps a name in Vault to a MongoDB Atlas command that executes and creates the database user credential shell session vault write database roles my password role db name my mongodbatlas database creation statements database name admin roles databaseName admin roleName atlasAdmin default ttl 1h max ttl 24h Success Data written to database roles my password role 1 Generate a new credential by reading from the creds endpoint with the name of the role shell session vault read database creds my password role Key Value lease id database creds my role 2f6a614c 4aa2 7b19 24b9 ad944a8d4de6 lease duration 1h lease renewable true password FBYwnnh fwc0quxtKf11 username v my password role DKbQEg6uRn Each invocation of the command generates a new credential MongoDB Atlas database credentials eventually become consistent when the MongoDB Atlas Admin API https www mongodb com docs atlas reference api resources spec v2 coordinates with hosted clusters in your Atlas project You cannot use the credentials successfully until the consistency process completes If you plan to use MongoDB Atlas credentials in a pipeline you may need to add a time delay or secondary process to account for the time required to establish consistency Client certificate credentials 1 Configure a role that maps a name in Vault to a MongoDB Atlas command that executes and creates the X509 type database user credential shell session vault write database roles my dynamic certificate role db name my 
mongodbatlas database creation statements database name external x509Type CUSTOMER roles databaseName db name roleName readWrite default ttl 1h max ttl 24h credential type client certificate credential config ca cert cat path to ca cert pem credential config ca private key cat path to private key pem credential config key type rsa credential config key bits 2048 credential config signature bits 256 credential config common name template Success Data written to database roles my dynamic certificate role 1 Generate a new credential by reading from the creds endpoint with the name of the role shell session vault read database creds my dynamic certificate role Key Value request id b6556b2d c379 5a92 465d 6597c506c821 lease id database creds my dynamic certificate role AZ5tao6NjLJctx7fm1bujKEL lease duration 1h lease renewable true client certificate BEGIN CERTIFICATE END CERTIFICATE private key BEGIN PRIVATE KEY END PRIVATE KEY private key type rsa username CN token my dynamic certificate role 1677262121 Client certificate authentication MongoDB Atlas supports X 509 client certificate based authentication https www mongodb com docs manual tutorial configure x509 client authentication for enhanced authentication security as an alternative to username and password authentication The MongoDB Atlas database plugin can be used to manage client certificate credentials for MongoDB Atlas users by using client certificate credential type vault api docs secret databases credential type See the usage vault docs secrets databases mongodbatlas usage section for examples using dynamic roles API The full list of configurable options can be seen in the MongoDB Atlas Database Plugin HTTP API vault api docs secret databases mongodbatlas page For more information on the database secrets engine s HTTP API please see the Database Secrets Engine API vault api docs secret databases page |
---
layout: docs
page_title: MySQL/MariaDB - Database - Secrets Engines
description: |-
MySQL is one of the supported plugins for the database secrets engine. This
plugin generates database credentials dynamically based on configured roles
for the MySQL database.
---
# MySQL/MariaDB database secrets engine
@include 'x509-sha1-deprecation.mdx'
MySQL is one of the supported plugins for the database secrets engine. This
plugin generates database credentials dynamically based on configured roles for
the MySQL database, and also supports [Static
Roles](/vault/docs/secrets/databases#static-roles).
This plugin has a few different instances built into Vault. Each instance is for
a slightly different MySQL driver. The only difference between these plugins is
the length of usernames generated by the plugin, as different versions of MySQL
accept different lengths. The available plugins are:

- `mysql-database-plugin`
- `mysql-aurora-database-plugin`
- `mysql-rds-database-plugin`
- `mysql-legacy-database-plugin`
See the [database secrets engine](/vault/docs/secrets/databases) docs for
more information about setting up the database secrets engine.
## Capabilities
| Plugin Name | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |
| -------------------------------------------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |
| Depends (see: [above](#mysql-mariadb-database-secrets-engine)) | Yes | Yes | Yes | Yes (1.7+) |
## Setup
1. Enable the database secrets engine if it is not already enabled:
```text
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure Vault with the proper plugin and connection information:
```text
$ vault write database/config/my-mysql-database \
plugin_name=mysql-database-plugin \
connection_url=":@tcp(127.0.0.1:3306)/" \
allowed_roles="my-role" \
username="vaultuser" \
password="vaultpass"
```
1. Configure a role that maps a name in Vault to an SQL statement to execute to
create the database credential:
```text
$ vault write database/roles/my-role \
db_name=my-mysql-database \
creation_statements="CREATE USER ''@'%' IDENTIFIED BY '';GRANT SELECT ON *.* TO ''@'%';" \
default_ttl="1h" \
max_ttl="24h"
Success! Data written to: database/roles/my-role
```
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can generate credentials.
1. Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```text
$ vault read database/creds/my-role
Key Value
--- -----
lease_id database/creds/my-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
lease_duration 1h
lease_renewable true
password yY-57n3X5UQhxnmFRP3f
username v_vaultuser_my-role_crBWVqVh2Hc1
```
## Client x509 certificate authentication
This plugin supports using MySQL's [x509 Client-side Certificate Authentication](https://dev.mysql.com/doc/refman/8.0/en/using-encrypted-connections.html#using-encrypted-connections-client-side-configuration).
To use this authentication mechanism, configure the plugin:
```shell-session
$ vault write database/config/my-mysql-database \
plugin_name=mysql-database-plugin \
allowed_roles="my-role" \
connection_url="user:password@tcp(localhost:3306)/test" \
tls_certificate_key=@/path/to/client.pem \
tls_ca=@/path/to/client.ca
```
Note: `tls_certificate_key` and `tls_ca` map to [`ssl-cert (combined with ssl-key)`](https://dev.mysql.com/doc/refman/8.0/en/connection-options.html#option_general_ssl-cert)
and [`ssl-ca`](https://dev.mysql.com/doc/refman/8.0/en/connection-options.html#option_general_ssl-ca) configuration options
from MySQL with the exception that the Vault parameters are the contents of those files, not filenames. As such,
the two options are independent of each other. See the [MySQL Connection Options](https://dev.mysql.com/doc/refman/8.0/en/connection-options.html)
for more information.
## Examples
### Using wildcards in grant statements
MySQL supports using wildcards in grant statements. These are sometimes needed
by applications which expect access to a large number of databases inside MySQL.
This can be realized by using a wildcard in the grant statement. For example if
you want the user created by Vault to have access to all databases starting with
`fooapp_` you could use the following creation statement:
```text
CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT SELECT ON `fooapp\_%`.* TO '{{name}}'@'%';
```
MySQL expects the part containing the wildcards to be wrapped in backticks.
If you want to add this creation statement to Vault via the Vault CLI, you cannot
simply paste the above statement on the CLI, because the shell will interpret the
text between the backticks as a command to execute. The easiest way to
get around this is to encode the creation statement as Base64 and feed that to Vault.
For example:
```shell-session
$ vault write database/roles/my-role \
db_name=mysql \
creation_statements="Q1JFQVRFIFVTRVIgJ3t7bmFtZX19J0AnJScgSURFTlRJRklFRCBCWSAne3twYXNzd29yZH19JzsgR1JBTlQgU0VMRUNUIE9OIGBmb29hcHBcXyVgLiogVE8gJ3t7bmFtZX19J0AnJSc7" \
default_ttl="1h" \
max_ttl="24h"
```
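One way to produce that Base64 value, assuming GNU coreutils (`base64 -w0`);
the quoted heredoc avoids shell interpretation of the backticks, and the
trailing newline it adds to the statement is harmless:

```shell-session
$ base64 -w0 <<'EOF'
CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT SELECT ON `fooapp\_%`.* TO '{{name}}'@'%';
EOF
```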
### Rotating root credentials in MySQL 5.6
The default root rotation setup for MySQL uses the `ALTER USER` syntax present
in MySQL 5.7 and up. For MySQL 5.6, the [root rotation
statements](/vault/api-docs/secret/databases#root_rotation_statements)
must be configured to use the old `SET PASSWORD` syntax. For example:
```shell-session
$ vault write database/config/my-mysql-database \
plugin_name=mysql-database-plugin \
connection_url=":@tcp(127.0.0.1:3306)/" \
root_rotation_statements="SET PASSWORD = PASSWORD('')" \
allowed_roles="my-role" \
username="root" \
password="mysql"
```
For a guide to root credential rotation, see [Database Root Credential
Rotation](/vault/tutorials/db-credentials/database-root-rotation).
## API
The full list of configurable options can be seen in the [MySQL database plugin
API](/vault/api-docs/secret/databases/mysql-maria) page.
For more information on the database secrets engine's HTTP API please see the
[Database secrets engine API](/vault/api-docs/secret/databases) page.
## Authenticating to Cloud DBs via IAM
### Google Cloud
Aside from IAM roles denoted by [Google's CloudSQL documentation](https://cloud.google.com/sql/docs/postgres/add-manage-iam-users#creating-a-database-user),
the following SQL privileges are needed by the service account's DB user for minimum functionality with Vault.
Additional privileges may be needed depending on the SQL configured on the database roles.
```sql
-- Enable service account to create users within DB
GRANT SELECT, CREATE, CREATE USER ON <database>.<object> TO "test-user"@"%" WITH GRANT OPTION;
```
### Setup
1. Enable the database secrets engine if it is not already enabled:
```shell-session
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure Vault with the proper plugin and connection information. Here you can explicitly enable GCP IAM authentication
and use [Application Default Credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc#how-to) to authenticate.
~> **Note**: For Google Cloud IAM, the protocol is `cloudsql-mysql` instead of `tcp`.
```shell-session
$ vault write database/config/my-mysql-database \
plugin_name="mysql-database-plugin" \
allowed_roles="my-role" \
connection_url="user@cloudsql-mysql(project:region:instance)/mysql" \
auth_type="gcp_iam"
```
You can also configure the connection and authenticate by directly passing in the service account credentials
as an encoded JSON string:
```shell-session
$ vault write database/config/my-mysql-database \
plugin_name="mysql-database-plugin" \
allowed_roles="my-role" \
connection_url="user@cloudsql-mysql(project:region:instance)/mysql" \
auth_type="gcp_iam" \
service_account_json="@my_credentials.json"
```
1. Configure a new role in Vault but override the default revocation statements
so Vault will drop the user instead:
```shell-session
$ vault write database/roles/my-role \
db_name=my-mysql-database \
creation_statements="CREATE USER ''@'%' IDENTIFIED BY '';GRANT SELECT ON *.* TO ''@'%';" \
revocation_statements="DROP USER ''@'%';" \
default_ttl="1h" \
max_ttl="24h"
```
1. When you finish configuring the new role, generate credentials as before:
```shell-session
$ vault read database/creds/my-role
Key Value
--- -----
lease_id database/creds/my-role/2f6b629f-7ah2-7b19-24b9-ad879a8d4bf2
lease_duration 1h
lease_renewable true
password vY-57n3X5UQhxnmGTK7g
username v_vaultuser_my-role_frBYNfYh3Kw3
``` | vault | layout docs page title MySQL MariaDB Database Secrets Engines description MySQL is one of the supported plugins for the database secrets engine This plugin generates database credentials dynamically based on configured roles for the MySQL database MySQL MariaDB database secrets engine include x509 sha1 deprecation mdx MySQL is one of the supported plugins for the database secrets engine This plugin generates database credentials dynamically based on configured roles for the MySQL database and also supports Static Roles vault docs secrets databases static roles This plugin has a few different instances built into vault each instance is for a slightly different MySQL driver The only difference between these plugins is the length of usernames generated by the plugin as different versions of mysql accept different lengths The available plugins are mysql database plugin mysql aurora database plugin mysql rds database plugin mysql legacy database plugin See the database secrets engine vault docs secrets databases docs for more information about setting up the database secrets engine Capabilities Plugin Name Root Credential Rotation Dynamic Roles Static Roles Username Customization Depends see above mysql mariadb database secrets engine Yes Yes Yes Yes 1 7 Setup 1 Enable the database secrets engine if it is not already enabled text vault secrets enable database Success Enabled the database secrets engine at database By default the secrets engine will enable at the name of the engine To enable the secrets engine at a different path use the path argument 1 Configure Vault with the proper plugin and connection information text vault write database config my mysql database plugin name mysql database plugin connection url tcp 127 0 0 1 3306 allowed roles my role username vaultuser password vaultpass 1 Configure a role that maps a name in Vault to an SQL statement to execute to create the database credential text vault write database roles my role db name my mysql database creation statements CREATE USER IDENTIFIED BY GRANT SELECT ON TO default ttl 1h max ttl 24h Success Data written to database roles my role Usage After the secrets engine is configured and a user machine has a Vault token with the proper permission it can generate credentials 1 Generate a new credential by reading from the creds endpoint with the name of the role text vault read database creds my role Key Value lease id database creds my role 2f6a614c 4aa2 7b19 24b9 ad944a8d4de6 lease duration 1h lease renewable true password yY 57n3X5UQhxnmFRP3f username v vaultuser my role crBWVqVh2Hc1 Client x509 certificate authentication This plugin supports using MySQL s x509 Client side Certificate Authentication https dev mysql com doc refman 8 0 en using encrypted connections html using encrypted connections client side configuration To use this authentication mechanism configure the plugin shell session vault write database config my mysql database plugin name mysql database plugin allowed roles my role connection url user password tcp localhost 3306 test tls certificate key path to client pem tls ca path to client ca Note tls certificate key and tls ca map to ssl cert combined with ssl key https dev mysql com doc refman 8 0 en connection options html option general ssl cert and ssl ca https dev mysql com doc refman 8 0 en connection options html option general ssl ca configuration options from MySQL with the exception that the Vault parameters are the contents of those files not filenames As such the two options are 
---
layout: docs
page_title: IBM Db2 - Database - Credentials
description: |-
Manage credentials for IBM Db2 using Vault's LDAP secrets engine.
---
# IBM Db2
<Note>
Vault supports IBM Db2 credential management using the LDAP secrets engine.
</Note>
Access to Db2 is managed by facilities that reside outside the Db2 database system. By
default, user authentication is completed by a security facility that relies on operating
system based authentication of users and passwords. This means that the lifecycle of user
identities in Db2 can't be managed using SQL statements or Vault's
database secrets engine.
To provide flexibility in accommodating authentication needs, Db2 ships with authentication
[plugin modules](https://www.ibm.com/docs/en/db2/11.5?topic=ins-ldap-based-authentication-group-lookup-support)
for Lightweight Directory Access Protocol (LDAP). This enables the Db2 database manager to
authenticate users and obtain group membership defined in an LDAP directory, removing the
requirement that users and groups be defined to the operating system.
Vault's [LDAP secrets engine](/vault/docs/secrets/ldap) can be used to manage the lifecycle
of credentials for Db2 environments that have been configured to delegate user authentication
and group membership to an LDAP server. You can use either dynamic credentials
or static credentials with the LDAP secrets engine.
## Before you start
The architecture for implementing this solution is highly context dependent.
The assumptions made in this guide help to provide a practical example of how this _could_
be configured.
Be sure to read the [IBM LDAP plugin documentation](https://www.ibm.com/docs/en/db2/11.5?topic=ins-ldap-based-authentication-group-lookup-support)
to understand the tradeoffs and security implications.
The setup presented in this guide makes the following assumptions:
- **Db2 is configured to authenticate users from an LDAP server using the
[server authentication plugin](https://www.ibm.com/docs/en/db2/11.5?topic=ins-ldap-based-authentication-group-lookup-support#d83944e187)
module.**
- **Db2 is configured to retrieve group membership from an LDAP server using the
[group lookup plugin](https://www.ibm.com/docs/en/db2/11.5?topic=ins-ldap-based-authentication-group-lookup-support#d83944e235)
module.**
- **The LDAP directory information tree (DIT) has the following structure:**
<CodeBlockConfig hideClipboard>
```plaintext
# Organizational units
dn: ou=groups,dc=example,dc=com
objectClass: organizationalUnit
ou: groups
dn: ou=users,dc=example,dc=com
objectClass: organizationalUnit
ou: users
# Db2 groups
# - https://www.ibm.com/docs/en/db2/11.5?topic=unix-db2-users-groups
# - https://www.ibm.com/docs/en/db2/11.5?topic=ins-ldap-based-authentication-group-lookup-support
dn: cn=db2iadm1,ou=groups,dc=example,dc=com
objectClass: groupOfNames
cn: db2iadm1
member: uid=db2inst1,ou=users,dc=example,dc=com
description: DB2 sysadm group
dn: cn=db2fadm1,ou=groups,dc=example,dc=com
objectClass: groupOfNames
cn: db2fadm1
member: uid=db2fenc1,ou=users,dc=example,dc=com
description: DB2 fenced user group
dn: cn=dev,ou=groups,dc=example,dc=com
objectClass: groupOfNames
cn: dev
member: uid=staticuser,ou=users,dc=example,dc=com
description: Development group
# Db2 users
# - https://www.ibm.com/docs/en/db2/11.5?topic=unix-db2-users-groups
# - https://www.ibm.com/docs/en/db2/11.5?topic=ins-ldap-based-authentication-group-lookup-support
dn: uid=db2inst1,ou=users,dc=example,dc=com
objectClass: inetOrgPerson
cn: db2inst1
sn: db2inst1
uid: db2inst1
userPassword: Db2AdminPassword
dn: uid=db2fenc1,ou=users,dc=example,dc=com
objectClass: inetOrgPerson
cn: db2fenc1
sn: db2fenc1
uid: db2fenc1
userPassword: Db2FencedPassword
# Add user for static role rotation
dn: uid=staticuser,ou=users,dc=example,dc=com
objectClass: inetOrgPerson
cn: staticuser
sn: staticuser
uid: staticuser
userPassword: StaticUserPassword
```
</CodeBlockConfig>
- **`IBMLDAPSecurity.ini` is updated to match the LDAP server configuration.**
## Setup
<Tabs>
<Tab heading="Dynamic credentials" group="dynamic">
1. Enable the LDAP secrets engine.
```shell-session
$ vault secrets enable ldap
```
1. Configure the LDAP secrets engine.
```shell-session
$ vault write ldap/config \
binddn="cn=admin,dc=example,dc=com" \
bindpass="LDAPAdminPassword" \
url="ldap://127.0.0.1:389"
```
1. Write a template file that defines how to create LDAP users.
```shell-session
$ cat > /tmp/creation.ldif <<EOF
dn: uid={{.Username}},ou=users,dc=example,dc=com
objectClass: inetOrgPerson
uid: {{.Username}}
cn: {{.Username}}
sn: {{.Username}}
userPassword: {{.Password}}
EOF
```
This file will be used by Vault to create LDAP users when credentials are requested.
1. Write a template file that defines how to delete LDAP users.
```shell-session
$ cat > /tmp/deletion_rollback.ldif <<EOF
dn: uid={{.Username}},ou=users,dc=example,dc=com
changetype: delete
EOF
```
This file will be used by Vault to delete LDAP users when the credentials are
revoked.
1. Create a Vault role that includes `creation.ldif` and
   `deletion_rollback.ldif`.
```shell-session
$ vault write ldap/role/dynamic \
creation_ldif=@/tmp/creation.ldif \
deletion_ldif=@/tmp/deletion_rollback.ldif \
rollback_ldif=@/tmp/deletion_rollback.ldif \
default_ttl=1h
```
</Tab>
<Tab heading="Static credentials" group="static">
1. Enable the LDAP secrets engine.
```shell-session
$ vault secrets enable ldap
```
1. Configure the LDAP secrets engine.
```shell-session
$ vault write ldap/config \
binddn="cn=admin,dc=example,dc=com" \
bindpass="LDAPAdminPassword" \
url="ldap://127.0.0.1:389"
```
1. Create a static role that maps a name in Vault to an entry in an LDAP directory.
```shell-session
$ vault write ldap/static-role/static \
username='staticuser' \
dn='uid=staticuser,ou=users,dc=example,dc=com' \
rotation_period="1h"
```
</Tab>
</Tabs>
## Usage
<Tabs>
<Tab heading="Dynamic credentials" group="dynamic">
Generate dynamic credentials using the Vault `dynamic` role.
```shell-session
$ vault read ldap/creds/dynamic
```
**Successful output:**
<CodeBlockConfig hideClipboard>
```shell-session
Key Value
--- -----
lease_id ldap/creds/dynamic/doa187ysuFExnvsJwmt8WrNo
lease_duration 1h
lease_renewable true
distinguished_names [uid=v_token_dynamic_joctelE9RB_1647220296,ou=users,dc=example,dc=com]
password 3WAOcuHUUt3qMKaUqo14pfTWapiOt8fmcBNoDo7Rx1R9dKxMOMVoMR3MYjCxQvmL
username v_token_dynamic_joctelE9RB_1647220296
```
</CodeBlockConfig>
Use the dynamic credentials to connect to Db2.
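For example, from a host with the Db2 client installed, you might connect with the
CLP using the generated username and password (the database name `SAMPLE` is
illustrative):

```shell-session
$ db2 connect to SAMPLE user v_token_dynamic_joctelE9RB_1647220296 using '3WAOcuHUUt3qMKaUqo14pfTWapiOt8fmcBNoDo7Rx1R9dKxMOMVoMR3MYjCxQvmL'
```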
</Tab>
<Tab heading="Static credentials" group="static">
Read the rotated password of the LDAP user that was used in the static role.
```shell-session
$ vault read ldap/static-cred/static
```
**Successful output:**
<CodeBlockConfig hideClipboard>
```shell-session
Key Value
--- -----
dn uid=staticuser,ou=users,dc=example,dc=com
last_vault_rotation 2022-03-14T11:56:15.252772-07:00
password VWpUznJ0IcaYbHbnyqwBuJhsfb9YTe5MzwePR9oTkkrs26GhGKZ7dD5HuULpFfri
rotation_period 1h
ttl 59m55s
username staticuser
```
</CodeBlockConfig>
Use the rotated credentials for `staticuser` to connect to Db2.
</Tab>
</Tabs>
## Tutorial
Refer to the [LDAP Secrets Engine tutorial](/vault/tutorials/secrets-management/openldap) to learn how to configure and use the LDAP secrets engine.
## API
The LDAP secrets engine has a full HTTP API. Please see the [LDAP secrets engine API docs](/vault/api-docs/secret/ldap) for more details.
---
layout: docs
page_title: Database - Secrets Engines
description: |-
The database secrets engine generates database credentials dynamically based
on configured roles. It works with a number of different databases through a
plugin interface. There are a number of built-in database types and an exposed
framework for running custom database types for extendability.
---
# Databases
The database secrets engine generates database credentials dynamically based on
configured roles. It works with a number of different databases through a plugin
interface. There are a number of built-in database types, and an exposed framework
for running custom database types for extendability. This means that services
that need to access a database no longer need to hardcode credentials: they can
request them from Vault, and use Vault's [leasing mechanism](/vault/docs/concepts/lease)
to more easily roll keys. These are referred to as "dynamic roles" or "dynamic
secrets".
Since every service is accessing the database with unique credentials, it makes
auditing much easier when questionable data access is discovered. You can track
it down to the specific instance of a service based on the SQL username.
Vault makes use of its own internal revocation system to ensure that users
become invalid within a reasonable time of the lease expiring.
### Static roles
Vault also supports **static roles** for all database secrets engines. Static
roles are a 1-to-1 mapping of Vault roles to usernames in a database. With
static roles, Vault stores and automatically rotates passwords for the
associated database user based on a configurable period of time or rotation
schedule.
When a client requests credentials for the static role, Vault returns the
current password for whichever database user is mapped to the requested role.
With static roles, anyone with the proper Vault policies can access the
associated user account in the database.
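As a sketch, creating a static role for an existing database user might look
like the following (the connection name `my-database` and user `app-user` are
illustrative; see the plugin-specific documentation for exact fields):

```shell-session
$ vault write database/static-roles/my-static-role \
    db_name=my-database \
    username="app-user" \
    rotation_period="24h"
```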
<Warning title="Do not use static roles for root database credentials">
Do not manage the same root database credentials that you provide to Vault in
<tt>config/</tt> with static roles.
Vault does not distinguish between standard credentials and root credentials
when rotating passwords. If you assign your root credentials to a static
role, any dynamic or static users managed by that database configuration will
fail after rotation because the password for <tt>config/</tt> is no longer
valid.
If you need to rotate root credentials, use the
[Rotate root credentials](/vault/api-docs/secret/databases#rotate-root-credentials)
API endpoint.
</Warning>
## Setup
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. Enable the database secrets engine:
```shell-session
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure Vault with the proper plugin and connection information:
```shell-session
$ vault write database/config/my-database \
plugin_name="..." \
connection_url="..." \
allowed_roles="..." \
username="..." \
    password="..."
```
~> It is highly recommended that a user within the database is created
specifically for Vault to use. This user will be used to manipulate
dynamic and static users within the database. This user is called the
"root" user within the documentation.
Vault will use the user specified here to create/update/revoke database
credentials. That user must have the appropriate permissions to perform
actions upon other database users (create, update credentials, delete, etc.).
This secrets engine can configure multiple database connections. For details
on the specific configuration options, please see the database-specific
documentation.
1. After configuring the root user, it is highly recommended you rotate that user's
password such that the vault user is not accessible by any users other than
Vault itself:
```shell-session
$ vault write -force database/rotate-root/my-database
```
!> When this is done, the password for the user specified in the previous step
is no longer accessible. Because of this, it is highly recommended that a
user is created specifically for Vault to use to manage database
users.
1. Configure a role that maps a name in Vault to a set of creation statements to
create the database credential:
```shell-session
$ vault write database/roles/my-role \
db_name=my-database \
creation_statements="..." \
default_ttl="1h" \
max_ttl="24h"
Success! Data written to: database/roles/my-role
```
   The `{{username}}` and `{{password}}` fields will be populated by the plugin
   with dynamically generated values. In some plugins the `{{expiration}}` field is also supported.
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can generate credentials.
1. Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```shell-session
$ vault read database/creds/my-role
Key Value
--- -----
lease_id database/creds/my-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
lease_duration 1h
lease_renewable true
password FSREZ1S0kFsZtLat-y94
username v-vaultuser-e2978cd0-ugp7iqI2hdlff5hfjylJ-1602537260
```
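Because the credentials are leased, they can be renewed or revoked early with the
lease ID returned above. For example (the lease ID shown is illustrative and will
differ in your environment):

```shell-session
$ vault lease renew database/creds/my-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
```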
## Database capabilities
As of Vault 1.6, all databases support dynamic roles and static roles. All plugins except MongoDB Atlas support rotating
the root user's credentials. MongoDB Atlas cannot support rotating the root user's credentials because it uses a public
and private key pair to authenticate.
<a id="db-capabilities-table" />
| Database | UI support | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization | Credential Types |
| ------------------------------------------------------------------- | ---------- | ------------------------ | ------------- | ------------ | ---------------------- | ---------------------------- |
| [Cassandra](/vault/docs/secrets/databases/cassandra) | No | Yes | Yes | Yes (1.6+) | Yes (1.7+) | password |
| [Couchbase](/vault/docs/secrets/databases/couchbase) | No | Yes | Yes | Yes | Yes (1.7+) | password |
| [Elasticsearch](/vault/docs/secrets/databases/elasticdb) | Yes (1.9+) | Yes | Yes | Yes (1.6+) | Yes (1.8+) | password |
| [HanaDB](/vault/docs/secrets/databases/hanadb) | No | Yes (1.6+) | Yes | Yes (1.6+) | Yes (1.12+) | password |
| [InfluxDB](/vault/docs/secrets/databases/influxdb) | No | Yes | Yes | Yes (1.6+) | Yes (1.8+) | password |
| [MongoDB](/vault/docs/secrets/databases/mongodb) | Yes (1.7+) | Yes | Yes | Yes | Yes (1.7+) | password |
| [MongoDB Atlas](/vault/docs/secrets/databases/mongodbatlas) | No | No | Yes | Yes | Yes (1.8+) | password, client_certificate |
| [MSSQL](/vault/docs/secrets/databases/mssql) | Yes (1.8+) | Yes | Yes | Yes | Yes (1.7+) | password |
| [MySQL/MariaDB](/vault/docs/secrets/databases/mysql-maria) | Yes (1.8+) | Yes | Yes | Yes | Yes (1.7+) | password, gcp_iam |
| [Oracle](/vault/docs/secrets/databases/oracle) | Yes (1.9+) | Yes | Yes | Yes | Yes (1.7+) | password |
| [PostgreSQL](/vault/docs/secrets/databases/postgresql) | Yes (1.9+) | Yes | Yes | Yes | Yes (1.7+) | password, gcp_iam |
| [Redis](/vault/docs/secrets/databases/redis) | No | Yes | Yes | Yes | No | password |
| [Redis ElastiCache](/vault/docs/secrets/databases/rediselasticache) | No | No | No | Yes | No | password |
| [Redshift](/vault/docs/secrets/databases/redshift) | No | Yes | Yes | Yes | Yes (1.8+) | password |
| [Snowflake](/vault/docs/secrets/databases/snowflake) | No | Yes | Yes | Yes | Yes (1.8+) | password, rsa_private_key |
## Custom plugins
This secrets engine allows custom database types to be run through the exposed
plugin interface. Please see the [custom database plugin](/vault/docs/secrets/databases/custom)
for more information.
## Credential types
Database systems support a variety of authentication methods and credential types.
The database secrets engine supports management of credentials alternative to usernames
and passwords. The [credential_type](/vault/api-docs/secret/databases#credential_type)
and [credential_config](/vault/api-docs/secret/databases#credential_config) parameters
of dynamic and static roles configure the credential that Vault will generate and
make available to database plugins. See the documentation of individual database
plugins for the credential types they support and usage examples.
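As an illustration, a role that asks Vault to generate an RSA key pair instead of
a password might be written as follows (a sketch, assuming a plugin such as
Snowflake that supports the `rsa_private_key` credential type):

```shell-session
$ vault write database/roles/my-key-role \
    db_name=my-database \
    creation_statements="..." \
    credential_type="rsa_private_key" \
    credential_config=key_bits=2048
```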
## Schedule-based static role rotation
The database secrets engine supports configuring schedule-based automatic
credential rotation for static roles with the
[rotation_schedule](/vault/api-docs/secret/databases#rotation_schedule) field.
For example:
```shell-session
$ vault write database/static-roles/my-role \
db_name=my-database \
username="vault" \
rotation_schedule="0 * * * SAT"
```
This configuration will set the role's credential rotation to occur on Saturday
at 00:00.
Additionally, this schedule-based approach allows for optionally configuring a
[rotation_window](/vault/api-docs/secret/databases#rotation_window) in which
the automatic rotation is allowed to occur. For example:
```shell-session
$ vault write database/static-roles/my-role \
db_name=my-database \
username="vault" \
rotation_window="1h" \
rotation_schedule="0 * * * SAT"
```
This configuration will set rotations to occur on Saturday at 00:00. The 1
hour `rotation_window` will prevent the rotation from occurring after 01:00. If
the static role's credential is not rotated during this window, due to a failure
or otherwise, it will not be rotated until the next scheduled rotation.
!> The `rotation_period` and `rotation_schedule` fields are
mutually exclusive. One of them must be set but not both.
## Password generation
Passwords are generated via [Password Policies](/vault/docs/concepts/password-policies).
Databases can optionally set a password policy for use across all roles or at the
individual role level for that database. For example, each time you call
`vault write database/config/my-database` you can specify a password policy for all
roles using `my-database`. Each database has a default password policy defined as:
20 characters with at least 1 uppercase character, at least 1 lowercase character,
at least 1 number, and at least 1 dash character.
The default password generation can be represented as the following password policy:
```hcl
length = 20
rule "charset" {
charset = "abcdefghijklmnopqrstuvwxyz"
min-chars = 1
}
rule "charset" {
charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
min-chars = 1
}
rule "charset" {
charset = "0123456789"
min-chars = 1
}
rule "charset" {
charset = "-"
min-chars = 1
}
```
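To override the default, attach a custom password policy when configuring the
connection. A minimal sketch, assuming a policy named `my-password-policy` has
already been created under `sys/policies/password`:

```shell-session
$ vault write database/config/my-database \
    password_policy="my-password-policy" \
    plugin_name="..." \
    connection_url="..." \
    allowed_roles="..."
```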
## Disable character escaping
As of Vault 1.10, you can now specify the option `disable_escaping` with a value of `true` in
some secrets engines to prevent Vault from escaping special characters in the username and password
fields. This is necessary for some alternate connection string formats, such as ADO with MSSQL or Azure
SQL. See the [databases secrets engine API docs](/vault/api-docs/secret/databases#common-fields) and reference
individual plugin documentation to determine support for this parameter.
For example, when the password contains URL-escaped characters like `#` or `%` they will
remain as so instead of becoming `%23` and `%25` respectively.
```shell-session
$ vault write database/config/my-mssql-database \
plugin_name="mssql-database-plugin" \
    connection_url='server=localhost;port=1433;user id={{username}};password={{password}};database=mydb;' \
username="root" \
password='your#StrongPassword%' \
disable_escaping="true"
```
## Unsupported databases
### AWS DynamoDB
Amazon Web Services (AWS) DynamoDB is a fully managed, serverless, key-value NoSQL database service. While
DynamoDB is not supported by the database secrets engine, you can use the [AWS secrets engine](/vault/docs/secrets/aws)
to provision dynamic credentials capable of accessing DynamoDB.
1. Verify you have the AWS secrets engine enabled and configured.
1. Create a role with the necessary permissions for your users to access DynamoDB. For example:
```shell-session
$ vault write aws/roles/aws-dynamodb-read \
credential_type=iam_user \
policy_document=-<<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:DescribeTable",
"dynamodb:GetItem",
"dynamodb:GetRecords"
],
"Resource": "arn:aws:dynamodb:us-east-1:1234567891:table/example-table"
},
{
"Effect": "Allow",
"Action": "dynamodb:ListTables",
"Resource": "*"
}
]
}
EOF
```
1. Generate dynamic credentials for DynamoDB using the `aws-dynamodb-read` role:
```shell-session
$ vault read aws/creds/aws-dynamodb-read
Key Value
--- -----
lease_id aws/creds/my-role/kbSnl9WSDzOXQerd8GiVh75N.DACNl
lease_duration 1h
lease_renewable true
access_key AKALMNOP123456
secret_key xY4XhS3AsM3s+R33tCaybsT2XI6BVL+vF+khbbYD
security_token <nil>
```
1. Use the dynamic credentials generated by Vault to access DynamoDB. For example, to connect with
the [AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/dynamodb/).
```shell-session
$ aws dynamodb list-tables --region us-east-1
{
"TableNames": [
"example-table"
]
}
```
## Tutorial
Refer to the following step-by-step tutorials for more information:
- [Secrets as a Service: Dynamic Secrets](/vault/tutorials/db-credentials/database-secrets)
- [Database Root Credential Rotation](/vault/tutorials/db-credentials/database-root-rotation)
## API
The database secrets engine has a full HTTP API. Please see the [Database
secrets engine API](/vault/api-docs/secret/databases) for more details.
---
layout: docs
page_title: Custom - Database - Secrets Engines
description: |-
The database secrets engine allows new functionality to be added through a
  plugin interface without needing to modify Vault's core code. This allows you to
write your own code to generate credentials in any database you wish. It also
allows databases that require dynamically linked libraries to be used as
plugins while keeping Vault itself statically linked.
---
# Custom database secrets engines
~> The interface for custom database plugins has changed in Vault 1.6. Vault will
continue to recognize the now deprecated version of this interface for some time.
If you are using a plugin with the deprecated interface, you should upgrade to the
newest version. See [Upgrading database plugins](#upgrading-database-plugins)
for more details.
~> **Advanced topic!** Plugin development is a highly advanced topic in Vault,
and is not required knowledge for day-to-day usage. If you don't plan on writing
any plugins, feel free to skip this section of the documentation.
The database secrets engine allows new functionality to be added through a
plugin interface without needing to modify Vault's core code. This allows you to
write your own code to generate credentials in any database you wish. It also
allows databases that require dynamically linked libraries to be used as plugins
while keeping Vault itself statically linked.
Please read the [Plugins internals](/vault/docs/plugins) docs for more
information about the plugin system before getting started building your
Database plugin.
Database plugins can be made to implement
[plugin multiplexing](/vault/docs/plugins/plugin-architecture#plugin-multiplexing)
which allows a single plugin process to be used for multiple database
connections. To enable multiplexing, the plugin must be compiled with the
`ServeMultiplex` function call from Vault's `dbplugin` package.
## Plugin interface
All plugins for the database secrets engine must implement the same interface. This interface
is found in `sdk/database/dbplugin/v5/database.go`:
```go
type Database interface {
// Initialize the database plugin. This is the equivalent of a constructor for the
// database object itself.
Initialize(ctx context.Context, req InitializeRequest) (InitializeResponse, error)
// NewUser creates a new user within the database. This user is temporary in that it
// will exist until the TTL expires.
NewUser(ctx context.Context, req NewUserRequest) (NewUserResponse, error)
// UpdateUser updates an existing user within the database.
UpdateUser(ctx context.Context, req UpdateUserRequest) (UpdateUserResponse, error)
// DeleteUser from the database. This should not error if the user didn't
// exist prior to this call.
DeleteUser(ctx context.Context, req DeleteUserRequest) (DeleteUserResponse, error)
// Type returns the Name for the particular database backend implementation.
// This type name is usually set as a constant within the database backend
// implementation, e.g. "mysql" for the MySQL database backend. This is used
// for things like metrics and logging. No behavior is switched on this.
Type() (string, error)
// Close attempts to close the underlying database connection that was
// established by the backend.
Close() error
}
```
Each of the request and response objects can also be found in `sdk/database/dbplugin/v5/database.go`.
In each of the requests, you will see at least 1 `Statements` object (in `UpdateUserRequest`
they are in sub-fields). This object represents the set of commands to run for that particular
operation. For the `NewUser` function, this is a set of commands to create the user (and often
set permissions for that user). These statements are from the following fields in the API:
| API Argument | Request Object |
| -------------------------- | -------------------------------------------------- |
| `creation_statements` | `NewUserRequest.Statements.Commands` |
| `revocation_statements` | `DeleteUserRequest.Statements.Commands` |
| `rollback_statements` | `NewUserRequest.RollbackStatements.Commands` |
| `renew_statements` | `UpdateUserRequest.Expiration.Statements.Commands` |
| `rotation_statements` | `UpdateUserRequest.Password.Statements.Commands` |
| `root_rotation_statements` | `UpdateUserRequest.Password.Statements.Commands` |
In many of the built-in plugins, they replace `{{name}}` (or `{{username}}`), `{{password}}`,
and/or `{{expiration}}` with the associated values. It is up to your plugin to perform these
string replacements. There is a helper function located in `sdk/database/helper/dbutil`
called `QueryHelper` that assists in doing this string replacement. You are not required to
use it, but it will make your plugin's behavior consistent with the built-in plugins.
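For instance, inside `NewUser` a plugin might render each configured creation
statement before executing it. A minimal sketch (the `renderStatements` helper is
hypothetical, and `username` is a value your plugin generated from
`req.UsernameConfig`):

```go
import (
	dbplugin "github.com/hashicorp/vault/sdk/database/dbplugin/v5"
	"github.com/hashicorp/vault/sdk/database/helper/dbutil"
)

// renderStatements substitutes the generated username and the Vault-supplied
// password into each creation statement.
func renderStatements(req dbplugin.NewUserRequest, username string) []string {
	rendered := make([]string, 0, len(req.Statements.Commands))
	for _, tpl := range req.Statements.Commands {
		rendered = append(rendered, dbutil.QueryHelper(tpl, map[string]string{
			"name":     username,
			"username": username,
			"password": req.Password,
		}))
	}
	return rendered
}
```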
The `InitializeRequest` object contains a map of keys to values. This data is what the
user specified as the configuration for the plugin. Your plugin should use this
data to make connections to the database. The response object contains a similar configuration
map. The response object should contain the configuration map that should be saved within Vault.
This allows the plugin to manipulate the configuration prior to saving it.
It is also passed a boolean value (`InitializeRequest.VerifyConnection`) indicating if your
plugin should initialize a connection to the database during the `Initialize` call. This
function is called when the configuration is written. This allows the user to know whether
the configuration is valid and able to connect to the database in question. If this is set to
false, no connection should be made during the `Initialize` call, but subsequent calls to the
other functions will need to open a connection.
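As a minimal sketch, an `Initialize` implementation that honors `VerifyConnection`
might look like the following (the `connURL` and `conn` fields on `MyDatabase`,
the choice of a MySQL driver, and the imports of `context`, `database/sql`, and
`fmt` are assumptions for illustration):

```go
func (db *MyDatabase) Initialize(ctx context.Context, req dbplugin.InitializeRequest) (dbplugin.InitializeResponse, error) {
	connURL, ok := req.Config["connection_url"].(string)
	if !ok || connURL == "" {
		return dbplugin.InitializeResponse{}, fmt.Errorf("connection_url must be specified")
	}
	db.connURL = connURL

	if req.VerifyConnection {
		// Only open and verify a connection when the caller asked for it.
		conn, err := sql.Open("mysql", connURL) // assumes a registered driver
		if err != nil {
			return dbplugin.InitializeResponse{}, err
		}
		if err := conn.PingContext(ctx); err != nil {
			return dbplugin.InitializeResponse{}, err
		}
		db.conn = conn
	}

	// Hand the (possibly modified) configuration back for Vault to store.
	return dbplugin.InitializeResponse{Config: req.Config}, nil
}
```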
## Serving a plugin
### Serving a plugin with multiplexing
~> Plugin multiplexing requires `github.com/hashicorp/vault/sdk v0.4.0` or above.
The plugin runs as a separate binary outside of Vault, so the plugin itself
will need a `main` function. Use the `ServeMultiplex` function within
`sdk/database/dbplugin/v5` to serve your multiplexed plugin.
Below is an example setup:
```go
package main
import (
	"log"
	"os"

	"github.com/hashicorp/vault/api"
	dbplugin "github.com/hashicorp/vault/sdk/database/dbplugin/v5"
)
func main() {
apiClientMeta := &api.PluginAPIClientMeta{}
flags := apiClientMeta.FlagSet()
flags.Parse(os.Args[1:])
err := Run()
if err != nil {
log.Println(err)
os.Exit(1)
}
}
func Run() error {
	dbplugin.ServeMultiplex(New)
return nil
}
func New() (interface{}, error) {
db, err := newDatabase()
if err != nil {
return nil, err
}
// This middleware isn't strictly required, but highly recommended to prevent accidentally exposing
// values such as passwords in error messages. An example of this is included below
db = dbplugin.NewDatabaseErrorSanitizerMiddleware(db, db.secretValues)
return db, nil
}
type MyDatabase struct {
// Variables for the database
password string
}
func newDatabase() (MyDatabase, error) {
// ...
db := &MyDatabase{
// ...
}
return db, nil
}
func (db *MyDatabase) secretValues() map[string]string {
return map[string]string{
db.password: "[password]",
}
}
```
Replace `MyDatabase` with the actual implementation of your database plugin.
### Serving a plugin without multiplexing
Serving a plugin without multiplexing requires calling the `Serve` function
from `sdk/database/dbplugin/v5` to serve your plugin.
The setup is exactly the same as the multiplexed case above, except for the
`Run` function:
```go
func Run() error {
dbType, err := New()
if err != nil {
return err
}
dbplugin.Serve(dbType.(dbplugin.Database))
return nil
}
```
## Running your plugin
The above main package, once built, will supply you with a binary of your
plugin. If you plan on distributing your plugin, we also recommend building with
[gox](https://github.com/mitchellh/gox) for cross-platform builds.
To use your plugin with the database secrets engine you need to place the binary in the
plugin directory as specified in the [plugin internals](/vault/docs/plugins) docs.
You should now be able to register your plugin into the Vault catalog. To do
this your token will need sudo permissions.
```shell-session
$ vault write sys/plugins/catalog/database/mydatabase-database-plugin \
sha256="..." \
command="mydatabase"
Success! Data written to: sys/plugins/catalog/database/mydatabase-database-plugin
```
Now you should be able to configure your plugin like any other:
```shell-session
$ vault write database/config/mydatabase \
plugin_name=mydatabase-database-plugin \
allowed_roles="readonly" \
myplugins_connection_details="..."
```
## Updating database plugins to leverage plugin versioning
@include 'plugin-versioning.mdx'
In addition to the `Database` interface above, database plugins can then also
implement the
[`PluginVersioner`](https://github.com/hashicorp/vault/blob/sdk/v0.6.0/sdk/logical/logical.go#L150-L154)
interface:
```go
// PluginVersioner is an optional interface to return version info.
type PluginVersioner interface {
// PluginVersion returns the version for the backend
PluginVersion() PluginVersion
}
type PluginVersion struct {
Version string
}
```
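A minimal sketch of implementing this on the `MyDatabase` type from the earlier
example (the version string is illustrative; `logical` refers to
`github.com/hashicorp/vault/sdk/logical`):

```go
// PluginVersion reports the version Vault records and displays for this plugin.
func (db *MyDatabase) PluginVersion() logical.PluginVersion {
	return logical.PluginVersion{Version: "v1.0.0"}
}
```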
## Upgrading database plugins to leverage plugin multiplexing
### Background
Scaling many external plugins can become resource intensive. To address
performance problems with scaling external plugins, database plugins can be
made to implement [plugin multiplexing](/vault/docs/plugins/plugin-architecture#plugin-multiplexing)
which allows a single plugin process to be used for multiple database
connections. To enable multiplexing, the plugin must be compiled with the
`ServeMultiplex` function call from Vault's `dbplugin` package.
### Upgrading your database plugin to leverage plugin multiplexing
There is only one step required to upgrade from a non-multiplexed to a
multiplexed database plugin: Change the `Serve` function call to `ServeMultiplex`.
This will run the RPC server for the plugin just as before. However, the
`ServeMultiplex` function takes the factory function directly as its argument.
This factory function is a function that returns an object that implements the
[`dbplugin.Database` interface](/vault/docs/secrets/databases/custom#plugin-interface).
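In terms of the `Run` function from the earlier examples, the upgrade looks like
this:

```go
// Before: construct the database yourself and serve a single instance.
func Run() error {
	dbType, err := New()
	if err != nil {
		return err
	}
	dbplugin.Serve(dbType.(dbplugin.Database))
	return nil
}
```

```go
// After: pass the factory itself; ServeMultiplex constructs database instances
// on demand for multiplexed connections.
func Run() error {
	dbplugin.ServeMultiplex(New)
	return nil
}
```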
### When should plugin multiplexing be avoided?
Some use cases that should avoid plugin multiplexing might include:
* Plugin process level separation is required
* Avoiding restart across all mounts/database connections for a plugin type on
crashes or plugin reload calls
## Upgrading database plugins to the v5 interface
### Background
In Vault 1.6, the database interface changed. The new version is referred to as version 5
and the previous version as version 4. This is due to prior versioning of the interface
that was not explicitly exposed.
The new interface was introduced for several reasons:
1. [Password policies](/vault/docs/concepts/password-policies) introduced in Vault 1.5 required
that Vault be responsible for generating passwords. In the prior version, the database
plugin was responsible for generating passwords. This prevented integration with
password policies.
2. Passwords needed to be generated by database plugins. This meant that plugin authors
   were responsible for generating secure passwords. This could be done with a helper
   function available within the Vault SDK; however, there was nothing preventing an
author from generating insecure passwords.
3. There were a number of inconsistencies within the version 4 interface that made it
confusing for authors. For instance: passwords were handled in 3 different ways.
`CreateUser` generated a password and returned it, `SetCredentials` receives a password
via a configuration struct and returns it, and `RotateRootCredentials` generated a
password and was expected to return an updated copy of its entire configuration
with the new password.
4. The `SetCredentials` and `RotateRootCredentials` used for static credential rotation,
and rotating the root user's credentials respectively were essentially the same operation:
change a user's password. The only practical difference was which user it was referring
to. This was especially evident when `SetCredentials` was used when rotating root
credentials (unless static credential rotation wasn't supported by the plugin in question).
5. The old interface included both `Init` and `Initialize` adding to the confusion.
The new interface is roughly modeled after a [gRPC](https://grpc.io/) interface. It has improved
future compatibility by not requiring changes to the interface definition to add additional data
in the requests or responses. It also simplifies the interface by merging several into a single
function call.
### Upgrading your custom database
Vault 1.6 supports both version 4 and version 5 database plugins. The support for version 4
plugins will be removed in a future release. Version 5 database plugins will not function with
Vault prior to version 1.6. If you upgrade your database plugins, ensure that you are only using
Vault 1.6 or later. To determine if a plugin is using version 4 or version 5, the following is a
list of changes in no particular order that you can check against your plugin to determine
the version:
1. The import path for version 4 is `github.com/hashicorp/vault/sdk/database/dbplugin`
whereas the import path for version 5 is `github.com/hashicorp/vault/sdk/database/dbplugin/v5`
2. Version 4 has the following functions: `Initialize`, `Init`, `CreateUser`, `RenewUser`,
`RevokeUser`, `SetCredentials`, `RotateRootCredentials`, `Type`, and `Close`. You can see the
full function signatures in `sdk/database/dbplugin/plugin.go`.
3. Version 5 has the following functions: `Initialize`, `NewUser`, `UpdateUser`, `DeleteUser`,
`Type`, and `Close`. You can see the full function signatures in
`sdk/database/dbplugin/v5/database.go`.
If you are using a version 4 custom database plugin, the following are basic instructions
for upgrading to version 5.
-> In version 4, password generation was the responsibility of the plugin. This is no longer
the case with version 5. Vault is responsible for generating passwords and passing them to
the plugin via `NewUserRequest.Password` and `UpdateUserRequest.Password.NewPassword`.
1. Change the import path from `github.com/hashicorp/vault/sdk/database/dbplugin` to
`github.com/hashicorp/vault/sdk/database/dbplugin/v5`. The package name is the same, so any
references to `dbplugin` can remain as long as those symbols exist within the new package
(such as the `Serve` function).
2. An easy way to see what functions need to be implemented is to put the following as a
global variable within your package: `var _ dbplugin.Database = (*MyDatabase)(nil)`. This
will fail to compile if the `MyDatabase` type does not adhere to the `dbplugin.Database` interface.
3. Replace `Init` and `Initialize` with the new `Initialize` function definition. The fields that
`Init` was taking (`config` and `verifyConnection`) are now wrapped into `InitializeRequest`.
The returned `map[string]interface{}` object is now wrapped into `InitializeResponse`.
Only `Initialize` is needed to adhere to the `Database` interface.
4. Update `CreateUser` to `NewUser`. The `NewUserRequest` object contains the username and
password of the user to be created. It also includes a list of statements for creating the
user as well as several other fields that may or may not be applicable. Your custom plugin
should use the password provided in the request, not generate one. If you generate a password
instead, Vault will not know about it and will give the caller the wrong password.
5. `SetCredentials`, `RotateRootCredentials`, and `RenewUser` are combined into `UpdateUser`.
The request object, `UpdateUserRequest`, contains three parts: the username to change, a
`ChangePassword` object, and a `ChangeExpiration` object. When one of the objects is non-nil,
the corresponding field (password or expiration) needs to change. For instance, if
the `ChangePassword` field is non-nil, the user's password should be changed; this is
equivalent to calling `SetCredentials`. If the `ChangeExpiration` field is non-nil, the
user's expiration date should be changed; this is equivalent to calling `RenewUser`.
Many databases don't need to do anything with the updated expiration. See the sketch after
this list for an example.
6. Update `RevokeUser` to `DeleteUser`. This is the simplest change. The username to be
deleted is enclosed in the `DeleteUserRequest` object.
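To make step 5 concrete, the following minimal sketch dispatches on the non-nil change
objects; as before, the `exec` helper is a hypothetical stand-in:

```go
func (db *MyDatabase) UpdateUser(ctx context.Context, req dbplugin.UpdateUserRequest) (dbplugin.UpdateUserResponse, error) {
	if req.Password != nil {
		// Equivalent to the old SetCredentials / RotateRootCredentials:
		// apply the password Vault generated.
		for _, stmt := range req.Password.Statements.Commands {
			if err := db.exec(ctx, stmt, req.Username, req.Password.NewPassword); err != nil {
				return dbplugin.UpdateUserResponse{}, err
			}
		}
	}
	if req.Expiration != nil {
		// Equivalent to the old RenewUser; many databases can ignore
		// the new expiration entirely.
		for _, stmt := range req.Expiration.Statements.Commands {
			if err := db.exec(ctx, stmt, req.Username, ""); err != nil {
				return dbplugin.UpdateUserResponse{}, err
			}
		}
	}
	return dbplugin.UpdateUserResponse{}, nil
}
```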
---
layout: docs
page_title: Snowflake - Database - Secrets Engines
description: |-
Snowflake is one of the supported plugins for the database secrets engine.
This plugin generates database credentials dynamically based on configured
roles for Snowflake hosted databases.
---
# Snowflake database secrets engine
Snowflake is one of the supported plugins for the database secrets engine. This plugin
generates database credentials dynamically based on configured roles for Snowflake-hosted
databases and supports [Static Roles](/vault/docs/secrets/databases#static-roles).
See the [database secrets engine](/vault/docs/secrets/databases) docs for
more information about setting up the database secrets engine.
The Snowflake database secrets engine uses
[gosnowflake](https://pkg.go.dev/github.com/snowflakedb/gosnowflake).
## Capabilities
| Plugin Name | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization | Credential Types |
| --------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |---------------------------|
| `snowflake-database-plugin` | Yes | Yes | Yes | Yes (1.8+) | password, rsa_private_key |
## Setup
1. Enable the database secrets engine if it is not already enabled:
```shell-session
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure Vault with the proper plugin and connection information:
```shell-session
$ vault write database/config/my-snowflake-database \
plugin_name=snowflake-database-plugin \
allowed_roles="my-role" \
connection_url=":@ecxxxx.west-us-1.azure/db_name" \
username="vaultuser" \
password="vaultpass"
```
A properly formatted data source name (DSN) needs to be provided during configuration of the
database. This DSN is typically formatted with the following options:
```shell-session
{{username}}:{{password}}@account/db_name
```
`{{username}}` and `{{password}}` will typically be used as is during configuration. The
special `{{ }}` formatting is replaced by the `username` and `password` options passed to
the configuration for the initial connection.
`account` is your Snowflake account identifier. You can find out more about this value by reading
the `server` section of
[this document](https://docs.snowflake.com/en/user-guide/odbc-parameters.html#connection-parameters).
`db_name` is the name of a database in your Snowflake instance.
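For example, with the `vaultuser`/`vaultpass` configuration above and the illustrative
account identifier `ecxxxx.west-us-1.azure`, the substituted DSN would be:

```shell-session
vaultuser:vaultpass@ecxxxx.west-us-1.azure/db_name
```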
~> **Note:** The user being utilized should have `ACCOUNTADMIN` privileges, and should be different
from the root user you were provided when making your Snowflake account. This allows you to rotate
the root credentials and still be able to access your account.
## Usage
After the secrets engine is configured, configure dynamic and static roles to
enable generating credentials.
### Dynamic roles
#### Password credentials
1. Configure a role that creates new Snowflake users with password credentials:
```shell-session
$ vault write database/roles/my-password-role \
db_name=my-snowflake-database \
creation_statements="CREATE USER PASSWORD = ''
DAYS_TO_EXPIRY = DEFAULT_ROLE=myrole;
GRANT ROLE myrole TO USER ;" \
default_ttl="1h" \
max_ttl="24h"
Success! Data written to: database/roles/my-password-role
```
1. Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```shell-session
$ vault read database/creds/my-password-role
Key Value
--- -----
lease_id database/creds/my-password-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
lease_duration 1h
lease_renewable true
password SsnoaA-8Tv4t34f41baD
username v_root_my_password_role_fU0jqEy4wMNoAY2h60yd_1610561532
```
#### Key pair credentials
1. Configure a role that creates new Snowflake users with key pair credentials:
```shell-session
$ vault write database/roles/my-keypair-role \
db_name=my-snowflake-database \
creation_statements="CREATE USER RSA_PUBLIC_KEY=''
DAYS_TO_EXPIRY = DEFAULT_ROLE=myrole;
GRANT ROLE myrole TO USER ;" \
credential_type="rsa_private_key" \
credential_config=key_bits=2048 \
default_ttl="1h" \
max_ttl="24h"
Success! Data written to: database/roles/my-keypair-role
```
1. Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```shell-session
$ vault read database/creds/my-keypair-role
Key Value
--- -----
lease_id database/creds/my-keypair-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
lease_duration 1h
lease_renewable true
rsa_private_key -----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
username v_token_my_keypair_role_n20WjS9U3LWTlBWn4Wbh_1654718170
```
You can directly use the PEM-encoded `rsa_private_key` value to establish a connection
to Snowflake. See [connection options](https://docs.snowflake.com/en/user-guide/key-pair-auth.html#step-6-configure-the-snowflake-client-to-use-key-pair-authentication)
for a list of clients and instructions for establishing a connection using key pair
authentication.
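For example, a Go client could feed the Vault-issued username and `rsa_private_key` into
gosnowflake's key pair (JWT) authenticator roughly as follows. This is a sketch; the exact
`Config` fields are an assumption here and should be confirmed against the gosnowflake
documentation for your version:

```go
import (
	"crypto/rsa"
	"crypto/x509"
	"database/sql"
	"encoding/pem"
	"errors"

	sf "github.com/snowflakedb/gosnowflake"
)

// connect opens a connection using a Vault-issued username and a
// PEM-encoded rsa_private_key.
func connect(account, user, privateKeyPEM string) (*sql.DB, error) {
	block, _ := pem.Decode([]byte(privateKeyPEM))
	if block == nil {
		return nil, errors.New("no PEM data in rsa_private_key")
	}
	key, err := x509.ParsePKCS8PrivateKey(block.Bytes)
	if err != nil {
		return nil, err
	}
	cfg := &sf.Config{
		Account:       account,
		User:          user,
		Authenticator: sf.AuthTypeJwt,
		PrivateKey:    key.(*rsa.PrivateKey),
	}
	dsn, err := sf.DSN(cfg)
	if err != nil {
		return nil, err
	}
	return sql.Open("snowflake", dsn)
}
```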
### Static roles
#### Password credentials
1. Configure a static role that rotates the password credential for an existing Snowflake user.
```shell-session
$ vault write database/static-roles/my-password-role \
db_name=my-snowflake-database \
username="snowflake_existing_user" \
rotation_period="24h" \
rotation_statements="ALTER USER SET PASSWORD = ''"
Success! Data written to: database/static-roles/my-password-role
```
1. Retrieve the current password credential from the `/static-creds` endpoint:
```shell-session
$ vault read database/static-creds/my-password-role
Key Value
--- -----
last_vault_rotation 2020-08-07T16:50:48.393354+01:00
password Z4-KH8F-VK5VJc0hSkXQ
rotation_period 24h
ttl 23h59m39s
username snowflake_existing_user
```
#### Key pair credentials
1. Configure a static role that rotates the key pair credential for an existing Snowflake user:
```shell-session
$ vault write database/static-roles/my-keypair-role \
db_name=my-snowflake-database \
username="snowflake_existing_user" \
rotation_period="24h" \
rotation_statements="ALTER USER SET RSA_PUBLIC_KEY=''" \
credential_type="rsa_private_key" \
credential_config=key_bits=2048
Success! Data written to: database/static-roles/my-keypair-role
```
1. Retrieve the current key pair credential from the `/static-creds` endpoint:
```shell-session
$ vault read database/static-creds/my-keypair-role
Key Value
--- -----
last_vault_rotation 2022-06-08T13:13:02.355928-07:00
rotation_period 24h
rsa_private_key -----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
ttl 23h59m55s
username snowflake_existing_user
```
You can directly use the PEM-encoded `rsa_private_key` value to establish a connection
to Snowflake. See [connection options](https://docs.snowflake.com/en/user-guide/key-pair-auth.html#step-6-configure-the-snowflake-client-to-use-key-pair-authentication)
for a list of clients and instructions for establishing a connection using key pair
authentication.
## Key pair authentication
Snowflake supports using [key pair authentication](https://docs.snowflake.com/en/user-guide/key-pair-auth.html)
for enhanced authentication security as an alternative to username and password authentication.
The Snowflake database plugin can be used to manage key pair credentials for Snowflake users
by using the `rsa_private_key` [credential_type](/vault/api-docs/secret/databases#credential_type).
See the [usage](/vault/docs/secrets/databases/snowflake#usage) section for examples using both
dynamic and static roles.
## API
The full list of configurable options can be seen in the [Snowflake database
plugin API](/vault/api-docs/secret/databases/snowflake) page.
For more information on the database secrets engine's HTTP API please see the
[Database secrets engine API](/vault/api-docs/secret/databases) page.
---
layout: docs
page_title: Elasticsearch - Database - Secrets Engines
description: >-
Elasticsearch is one of the supported plugins for the database secrets engine.
This
plugin generates database credentials dynamically based on configured roles
for Elasticsearch.
---
# Elasticsearch database secrets engine
@include 'x509-sha1-deprecation.mdx'
Elasticsearch is one of the supported plugins for the database secrets engine. This
plugin generates database credentials dynamically based on configured roles for
Elasticsearch.
See the [database secrets engine](/vault/docs/secrets/databases) docs for
more information about setting up the database secrets engine.
## Capabilities
| Plugin Name | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |
| ------------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |
| `elasticsearch-database-plugin` | Yes | Yes | Yes (1.6+) | Yes (1.8+) |
## Getting started
To take advantage of this plugin, you must first enable Elasticsearch's native realm of security by activating X-Pack. These
instructions will walk you through doing this using Elasticsearch 7.1.1.
### Enable X-Pack security in Elasticsearch
Read [Securing the Elastic Stack](https://www.elastic.co/guide/en/elastic-stack-overview/7.1/elasticsearch-security.html) and
follow [its instructions for enabling X-Pack Security](https://www.elastic.co/guide/en/elasticsearch/reference/7.1/setup-xpack.html).
### Enable encrypted communications
This plugin communicates with Elasticsearch's security API. ES requires TLS for these communications so they can be
encrypted.
To set up TLS in Elasticsearch, first read [encrypted communications](https://www.elastic.co/guide/en/elastic-stack-overview/7.1/encrypting-communications.html)
and go through its instructions on [encrypting HTTP client communications](https://www.elastic.co/guide/en/elasticsearch/reference/7.1/configuring-tls.html#tls-http).
After enabling TLS on the Elasticsearch side, you'll need to convert the .p12 certificates you generated to other formats so they can be
used by Vault. [Here is an example using OpenSSL](https://stackoverflow.com/questions/15144046/converting-pkcs12-certificate-into-pem-using-openssl)
to convert our .p12 certs to the pem format.
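For instance, commands along these lines (with illustrative file names) extract a PEM
certificate and an unencrypted PEM key from a PKCS#12 bundle:

```shell-session
$ openssl pkcs12 -in elastic-certificates.p12 -clcerts -nokeys -out elastic-certificates.crt.pem
$ openssl pkcs12 -in elastic-certificates.p12 -nocerts -nodes -out elastic-certificates.key.pem
```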
Also, on the instance running Elasticsearch, we needed to install our newly generated CA certificate that was originally in the .p12 format.
We did this by converting the .p12 CA cert to a pem, and then further converting that
[pem to a crt](https://stackoverflow.com/questions/13732826/convert-pem-to-crt-and-key), adding that crt to `/usr/share/ca-certificates/extra`,
and using `sudo dpkg-reconfigure ca-certificates`.
The above instructions may vary if you are not using an Ubuntu machine. Please ensure you're using the methods specific to your operating
environment. Describing every operating environment is outside the scope of these instructions.
### Set up passwords
When done, verify that you've enabled X-Pack by running `$ $ES_HOME/bin/elasticsearch-setup-passwords interactive`. You'll
know it's been set up successfully if it takes you through a number of password-inputting steps.
### Create a role for Vault
Next, in Elasticsearch, we recommend that you create a user just for Vault to use in managing secrets.
To do this, first create a role that will allow Vault the minimum privileges needed to administer users and passwords by performing a
POST to Elasticsearch. For this request, we used the built-in `elastic` superuser whose password we created in the
`$ $ES_HOME/bin/elasticsearch-setup-passwords interactive` step.
```shell-session
$ curl \
-X POST \
-H "Content-Type: application/json" \
-d '{"cluster": ["manage_security"]}' \
http://elastic:$PASSWORD@localhost:9200/_xpack/security/role/vault
```
Next, create a user for Vault associated with that role.
```shell-session
$ curl \
-X POST \
-H "Content-Type: application/json" \
-d @data.json \
http://elastic:$PASSWORD@localhost:9200/_xpack/security/user/vault
```
The contents of `data.json` in this example are:
```json
{
"password" : "myPa55word",
"roles" : [ "vault" ],
"full_name" : "Hashicorp Vault",
"metadata" : {
"plugin_name": "Vault Plugin Database Elasticsearch",
"plugin_url": "https://github.com/hashicorp/vault-plugin-database-elasticsearch"
}
}
```
Now, Elasticsearch is configured and ready to be used with Vault.
## Setup
1. Enable the database secrets engine if it is not already enabled:
```shell-session
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure Vault with the proper plugin and connection information:
```shell-session
$ vault write database/config/my-elasticsearch-database \
plugin_name="elasticsearch-database-plugin" \
allowed_roles="internally-defined-role,externally-defined-role" \
username=vault \
password=myPa55word \
url=http://localhost:9200 \
ca_cert=/usr/share/ca-certificates/extra/elastic-stack-ca.crt.pem \
client_cert=$ES_HOME/config/certs/elastic-certificates.crt.pem \
client_key=$ES_HOME/config/certs/elastic-certificates.key.pem
```
## Usage
After the secrets engine is configured, configure dynamic and static roles to enable generating credentials.
### Dynamic Roles
Dynamic roles generate new credentials for every request.
1. Configure a role that maps a name in Vault to a role definition in Elasticsearch.
This is considered the most secure type of role because nobody can perform
a privilege escalation by editing a role's privileges out-of-band in
Elasticsearch:
```shell-session
$ vault write database/roles/internally-defined-role \
db_name=my-elasticsearch-database \
creation_statements='{"elasticsearch_role_definition": {"indices": [{"names":["*"], "privileges":["read"]}]}}' \
default_ttl="1h" \
max_ttl="24h"
```
1. Alternatively, configure a role that maps a name in Vault to a pre-existing
role definition in Elasticsearch:
```shell-session
$ vault write database/roles/externally-defined-role \
db_name=my-elasticsearch-database \
creation_statements='{"elasticsearch_roles": ["pre-existing-role-in-elasticsearch"]}' \
default_ttl="1h" \
max_ttl="24h"
```
1. Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```shell-session
$ vault read database/creds/internally-defined-role
Key Value
--- -----
lease_id database/creds/internally-defined-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
lease_duration 1h
lease_renewable true
password 0ZsueAP-dqCNGZo35M0n
username v-vaultuser-internally-defined-role-AgIViC5TdQHBdeiCxae0-1602541724
```
### Static Roles
Static roles return the same credentials for every request. The credentials are rotated based on the schedule provided.
1. Configure a static role that maps a name in Vault to a pre-existing user in Elasticsearch:
```shell-session
$ vault write database/static-roles/my-static-role \
db_name=my-elasticsearch-database \
username=my-existing-elasticsearch-username \
rotation_period="24h"
```
1. Retrieve the current username and password from the `/static-creds` endpoint:
```shell-session
$ vault read database/static-creds/my-static-role
Key Value
--- -----
last_vault_rotation 2023-09-14T08:24:39.650491913-04:00
password current-password
rotation_period 24h
ttl 23h59m59s
username my-existing-elasticsearch-username
```
## API
The full list of configurable options can be seen in the [Elasticsearch database plugin API](/vault/api-docs/secret/databases/elasticdb) page.
For more information on the database secrets engine's HTTP API please see the
[Database secrets engine API](/vault/api-docs/secret/databases) page.
---
layout: docs
page_title: MongoDB - Database - Secrets Engines
description: |-
MongoDB is one of the supported plugins for the database secrets engine. This
plugin generates database credentials dynamically based on configured roles
for the MongoDB database.
---
# MongoDB database secrets engine
@include 'x509-sha1-deprecation.mdx'
MongoDB is one of the supported plugins for the database secrets engine. This
plugin generates database credentials dynamically based on configured roles for
the MongoDB database and also supports
[Static Roles](/vault/docs/secrets/databases#static-roles).
See the [database secrets engine](/vault/docs/secrets/databases) docs for
more information about setting up the database secrets engine.
## Capabilities
| Plugin Name | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |
| ------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |
| `mongodb-database-plugin` | Yes | Yes | Yes | Yes (1.7+) |
## Setup
1. Enable the database secrets engine if it is not already enabled:
```text
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure Vault with the proper plugin and connection information:
```text
$ vault write database/config/my-mongodb-database \
plugin_name=mongodb-database-plugin \
allowed_roles="my-role" \
connection_url="mongodb://:@mongodb.acme.com:27017/admin?tls=true" \
username="vaultuser" \
password="vaultpass!"
```
1. Configure a role that maps a name in Vault to a MongoDB command that executes and
creates the database credential:
```text
$ vault write database/roles/my-role \
db_name=my-mongodb-database \
creation_statements='{ "db": "admin", "roles": [{ "role": "readWrite" }, {"role": "read", "db": "foo"}] }' \
default_ttl="1h" \
max_ttl="24h"
Success! Data written to: database/roles/my-role
```
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can generate credentials.
1. Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```text
$ vault read database/creds/my-role
Key Value
--- -----
lease_id database/creds/my-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
lease_duration 1h
lease_renewable true
password LEm-lcDJ2k0Hi05FvizN
username v-vaultuser-my-role-ItceCZHlp0YGn90Puy9Z-1602542024
```
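As a quick sanity check, you could connect with the generated credentials using `mongosh`,
substituting the illustrative host and credentials from the output above:

```text
$ mongosh "mongodb://v-vaultuser-my-role-ItceCZHlp0YGn90Puy9Z-1602542024:LEm-lcDJ2k0Hi05FvizN@mongodb.acme.com:27017/admin?tls=true"
```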
## Client x509 certificate authentication
This plugin supports using MongoDB's [x509 Client-side Certificate Authentication](https://docs.mongodb.com/manual/core/security-x.509/)
To use this authentication mechanism, configure the plugin:
```shell-session
$ vault write database/config/my-mongodb-database \
plugin_name=mongodb-database-plugin \
allowed_roles="my-role" \
connection_url="mongodb://@mongodb.acme.com:27017/admin" \
tls_certificate_key=@/path/to/client.pem \
tls_ca=@/path/to/client.ca
```
Note: `tls_certificate_key` and `tls_ca` map to [`tlsCertificateKeyFile`](https://docs.mongodb.com/manual/reference/program/mongo/#cmdoption-mongo-tlscertificatekeyfile)
and [`tlsCAFile`](https://docs.mongodb.com/manual/reference/program/mongo/#cmdoption-mongo-tlscafile) configuration options
from MongoDB with the exception that the Vault parameters are the contents of those files, not filenames. As such,
the two options are independent of each other. See the [MongoDB Configuration Options](https://docs.mongodb.com/manual/reference/program/mongo/)
for more information.
## Tutorial
Refer to [Database Secrets Engine tutorial](/vault/tutorials/db-credentials/database-secrets) for a
step-by-step example of using the database secrets engine.
## API
The full list of configurable options can be seen in the [MongoDB database
plugin API](/vault/api-docs/secret/databases/mongodb) page.
For more information on the database secrets engine's HTTP API please see the
[Database secrets engine API](/vault/api-docs/secret/databases) page.
---
layout: docs
page_title: Couchbase - Database - Secrets Engines
description: |-
Couchbase is one of the supported plugins for the database secrets engine.
This plugin generates database credentials dynamically based on configured
roles for the Couchbase database.
---
# Couchbase database secrets engine
@include 'x509-sha1-deprecation.mdx'
Couchbase is one of the supported plugins for the database secrets engine. This
plugin generates database credentials dynamically based on configured roles for
the Couchbase database.
See the [database secrets engine](/vault/docs/secrets/databases) docs for
more information about setting up the database secrets engine.
## Capabilities
| Plugin Name | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |
| --------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |
| `couchbase-database-plugin` | Yes | Yes | Yes | Yes (1.7+) |
## Setup
1. Enable the database secrets engine if it is not already enabled:
```bash
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure Vault with the proper plugin and connection configuration:
```bash
$ vault write database/config/my-couchbase-database \
plugin_name="couchbase-database-plugin" \
hosts="couchbase://127.0.0.1" \
tls=true \
base64pem="${BASE64PEM}" \
username="vaultuser" \
password="vaultpass" \
allowed_roles="my-*-role"
```
Where `${BASE64PEM}` is the server's root certificate authority in PEM
format, encoded as a base64 string with no new lines.
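
   For example, assuming the CA certificate lives in a local file named
   `ca.pem` (a hypothetical path), you could produce this value with GNU
   coreutils:

   ```bash
   # -w0 disables line wrapping so the output is a single line
   $ BASE64PEM=$(base64 -w0 ca.pem)
   ```

   On macOS, `base64 -i ca.pem | tr -d '\n'` produces the same single-line
   output.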
To connect to clusters prior to version 6.5.0, a `bucket_name` must also
be configured:
```bash
$ vault write database/config/my-couchbase-database \
plugin_name="couchbase-database-plugin" \
hosts="couchbase://127.0.0.1" \
tls=true \
base64pem="${BASE64PEM}" \
username="vaultuser" \
password="vaultpass" \
allowed_roles="my-*-role" \
bucket_name="travel-sample"
```
1. You should consider rotating the admin password. Note that if you do, the
new password will never be made available through Vault, so you should
create a Vault-specific database admin user for this.
```bash
vault write -force database/rotate-root/my-couchbase-database
```
## Usage
After the secrets engine is configured, configure dynamic and static roles
to enable generating credentials.
### Dynamic roles
1. Configure a dynamic role that maps a name in Vault to a JSON string
specifying a Couchbase RBAC role. The default value for
`creation_statements` is a read-only admin role:
`{"Roles": [{"role":"ro_admin"}]}`.
```bash
$ vault write database/roles/my-dynamic-role \
db_name="my-couchbase-database" \
creation_statements='{"Roles": [{"role":"ro_admin"}]}' \
default_ttl="5m" \
max_ttl="1h"
```
   Note that any groups specified in the creation statement must already
   exist. A bucket-scoped example follows this list.
1. Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```bash
$ vault read database/creds/my-dynamic-role
Key Value
--- -----
lease_id database/creds/my-dynamic-role/wiLNQjtcvCOT1VnN3qnUJnBz
lease_duration 5m
lease_renewable true
password mhyM-Gs7IpmOPnSqXEDe
username v-root-my-dynamic-role-eXnVr4gm55dpM1EVgTYz-1596815027
```
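
As noted above, a creation statement can also scope the generated user to a
specific bucket. The following sketch is illustrative only: it assumes an
existing `travel-sample` bucket and the `role`/`bucket_name` JSON field names
accepted by the plugin's underlying Couchbase SDK, so verify the exact format
against your plugin version:

```bash
$ vault write database/roles/my-bucket-role \
    db_name="my-couchbase-database" \
    creation_statements='{"Roles": [{"role":"data_reader","bucket_name":"travel-sample"}]}' \
    default_ttl="5m" \
    max_ttl="1h"
```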
### Static roles
1. Configure a static role that maps a name in Vault to an existing Couchbase
user.
```bash
$ vault write database/static-roles/my-static-role \
db_name="my-couchbase-database" \
username="my-existing-couchbase-user" \
rotation_period=5m
```
1. Retrieve the credentials from the `/static-creds` endpoint:
```bash
$ vault read database/static-creds/my-static-role
Key Value
--- -----
last_vault_rotation 2020-08-07T16:50:48.393354+01:00
password Z4-KH8F-VK5VJc0hSkXQ
rotation_period 5m
ttl 4m39s
username my-existing-couchbase-user
```
## API
The full list of configurable options can be seen in the [Couchbase database plugin API](/vault/api-docs/secret/databases/couchbase) page.
For more information on the database secrets engine's HTTP API please see the [Database secrets engine API](/vault/api-docs/secret/databases) page.
---
layout: docs
page_title: MSSQL - Database - Secrets Engines
description: |-
MSSQL is one of the supported plugins for the database secrets engine. This
plugin generates database credentials dynamically based on configured roles
for the MSSQL database.
---
# MSSQL database secrets engine
MSSQL is one of the supported plugins for the database secrets engine. This
plugin generates database credentials dynamically based on configured roles for
the MSSQL database.
See the [database secrets engine](/vault/docs/secrets/databases) docs for
more information about setting up the database secrets engine.
The following privileges are needed by the plugin for minimum functionality. Additional privileges may be needed
depending on the SQL configured on the database roles.
```sql
-- Create Login
CREATE LOGIN vault_login WITH PASSWORD = '<password>';
-- Create User
CREATE user vault_user for login vault_login;
-- Grant Permissions
GRANT ALTER ANY LOGIN TO "vault_user";
GRANT ALTER ANY USER TO "vault_user";
GRANT ALTER ANY CONNECTION TO "vault_login";
GRANT CONTROL ON SCHEMA::<schema_name> TO "vault_user";
EXEC sp_addrolemember "db_accessadmin", "vault_user";
```
## Capabilities
| Plugin Name | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |
| ----------------------- | ------------------------ | ------------- | ------------ | ---------------------- |
| `mssql-database-plugin` | Yes | Yes | Yes | Yes (1.7+) |
## Setup
1. Enable the database secrets engine if it is not already enabled:
```text
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure Vault with the proper plugin and connection information:
```text
$ vault write database/config/my-mssql-database \
plugin_name=mssql-database-plugin \
       connection_url='sqlserver://{{username}}:{{password}}@localhost:1433' \
allowed_roles="my-role" \
username="vaultuser" \
password="yourStrong(!)Password"
```
   ~> Note: The example above demonstrates a connection using a SQL Server user named `vaultuser`; the user could also be a Windows Authentication user that is part of an Active Directory domain, for example: `DOMAIN\vaultuser`.
In this case, we've configured Vault with the user "vaultuser" and password
"yourStrong(!)Password", connecting to an instance at "localhost" on port 1433. It is
not necessary that Vault has the vaultuser login, but the user must have privileges
to create logins and manage processes. The fixed server roles
`securityadmin` and `processadmin` are examples of built-in roles that grant
these permissions. The user also must have privileges to create database
users and grant permissions in the databases that Vault manages. The fixed
   database roles `db_accessadmin` and `db_securityadmin` are examples of
built-in roles that grant these permissions.
1. Configure a role that maps a name in Vault to an SQL statement to execute to
create the database credential:
```text
$ vault write database/roles/my-role \
db_name=my-mssql-database \
creation_statements="CREATE LOGIN [] WITH PASSWORD = '';\
CREATE USER [] FOR LOGIN [];\
GRANT SELECT ON SCHEMA::dbo TO [];" \
default_ttl="1h" \
max_ttl="24h"
Success! Data written to: database/roles/my-role
```
~> **Be aware!** If no `revocation_statement` is supplied,
vault will execute the default revocation procedure.
In larger databases, this might cause connection timeouts.
Please specify a revocation statement in such a scenario.
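
For example, a minimal sketch of a role with a custom revocation statement
(assumes SQL Server 2016+ for `DROP USER IF EXISTS`; it drops only the
database user, so depending on your environment you may also need to drop the
login, as shown in the Amazon RDS example below):

```shell-session
$ vault write database/roles/my-role \
    db_name=my-mssql-database \
    creation_statements="CREATE LOGIN [{{name}}] WITH PASSWORD = '{{password}}';\
        CREATE USER [{{name}}] FOR LOGIN [{{name}}];\
        GRANT SELECT ON SCHEMA::dbo TO [{{name}}];" \
    revocation_statements="DROP USER IF EXISTS [{{name}}];" \
    default_ttl="1h" \
    max_ttl="24h"
```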
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can generate credentials.
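
For example, a minimal Vault ACL policy granting that permission might look
like the following sketch (adjust the mount path and role name to your
environment):

```hcl
# Allow generating dynamic credentials for the "my-role" database role
path "database/creds/my-role" {
  capabilities = ["read"]
}
```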
1. Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```text
$ vault read database/creds/my-role
Key Value
--- -----
lease_id database/creds/my-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
lease_duration 1h
lease_renewable true
password wJKpk9kg-T1Ma7qQfS8y
username v-vaultuser-my-role-r7kCtKGGr3eYQP1OGR6G-1602542258
```
## Example for Azure SQL database
Here is a complete example using Azure SQL Database. Note that databases in Azure SQL Database are [contained databases](https://docs.microsoft.com/en-us/sql/relational-databases/databases/contained-databases) and that we do not create a login for the user; instead, we associate the password directly with the user itself. Also note that you will need a separate connection and role for each Azure SQL database for which you want to generate dynamic credentials. You can use a single database backend mount for all these databases or use a separate mount for each of them. In this example, we use a custom path for the database backend.
<Note>
Azure SQL databases may use different authentication mechanisms that are configured on the SQL server. Vault only supports SQL authentication. Azure AD authentication is not supported.
</Note>
First, we mount a database backend at the azuresql path with `vault secrets enable -path=azuresql database`. Then we configure a connection called "testvault" to connect to a database called "test-vault", using "azuresql" at the beginning of our path:
~> Note: If you are using a Windows Vault client with cmd.exe, change the single quotes to double quotes in the connection string. Windows cmd.exe does not interpret single quotes as a continuous string.
```shell-session
$ vault write azuresql/config/testvault \
plugin_name=mssql-database-plugin \
connection_url='server=hashisqlserver.database.windows.net;port=1433;user id=admin;password=pAssw0rd;database=test-vault;app name=vault;' \
allowed_roles="test"
```
Now we add a role called "test" for use with the "testvault" connection:
```shell-session
$ vault write azuresql/roles/test \
db_name=testvault \
creation_statements="CREATE USER [] WITH PASSWORD = '';" \
revocation_statements="DROP USER IF EXISTS []" \
default_ttl="1h" \
max_ttl="24h"
```
We can now use this role to dynamically generate credentials for the Azure SQL database, test-vault:
```shell-session
$ vault read azuresql/creds/test
Key Value
--- -----
lease_id azuresql/creds/test/2e5b1e0b-a081-c7e1-5622-39f58e79a719
lease_duration 1h0m0s
lease_renewable true
password cZ-BJy-SqO5tKwazAuUP
username v-token-test-tr2t4x9pxvq1z8878s9s-1513446795
```
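
If the credentials need to be invalidated before the TTL expires, revoke the
lease using the `lease_id` from the output above:

```shell-session
$ vault lease revoke azuresql/creds/test/2e5b1e0b-a081-c7e1-5622-39f58e79a719
```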
When we no longer need the backend, we can disable it with `vault secrets disable azuresql`. Now, you can use the MSSQL Database Plugin with your Azure SQL databases.
## Amazon RDS
The MSSQL plugin supports databases running on [Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_SQLServer.html),
but there are differences that need to be accommodated. A key limitation is that Amazon RDS doesn't support
the "sysadmin" role, which is used by default during Vault's revocation process for MSSQL. The workaround
is to add custom revocation statements to roles, for example:
```shell
vault write database/roles/my-role revocation_statements="\
USE my_database \
IF EXISTS \
  (SELECT name \
   FROM sys.database_principals \
   WHERE name = N'{{name}}') \
BEGIN \
  DROP USER [{{name}}] \
END \
IF EXISTS \
  (SELECT name \
   FROM master.sys.server_principals \
   WHERE name = N'{{name}}') \
BEGIN \
  DROP LOGIN [{{name}}] \
END"
```
## API
The full list of configurable options can be seen in the [MSSQL database
plugin API](/vault/api-docs/secret/databases/mssql) page.
For more information on the database secrets engine's HTTP API please see the
[Database secrets engine API](/vault/api-docs/secret/databases) page.
---
layout: docs
page_title: Redis ElastiCache - Database - Secrets Engines
description: |-
Redis ElastiCache is one of the supported plugins for the database secrets engine.
This plugin generates static credentials for existing managed roles.
---
# Redis ElastiCache database secrets engine
Redis ElastiCache is one of the supported plugins for the database secrets engine.
This plugin generates static credentials for existing managed roles.
See the [database secrets engine](/vault/docs/secrets/databases) docs for
more information about setting up the database secrets engine.
## Capabilities
| Plugin Name | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |
| --------------------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |
| `redis-elasticache-database-plugin` | No | No | Yes | No |
## Setup
1. Enable the database secrets engine if it is not already enabled:
```shell-session
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure Vault with the proper plugin and connection configuration:
```shell-session
$ vault write database/config/my-redis-elasticache-cluster \
plugin_name="redis-elasticache-database-plugin" \
url="primary-endpoint.my-cluster.xxx.yyy.cache.amazonaws.com:6379" \
access_key_id="AKI***" \
secret_access_key="ktriNYvULAWLzUmTGb***" \
region=us-east-1 \
allowed_roles="*"
```
~> **Note**: The `access_key_id`, `secret_access_key` and `region` parameters are optional. If omitted, authentication falls back
on the AWS credentials provider chain.
~> **Deprecated**: The `username` & `password` parameters are deprecated but supported for backward compatibility. They are replaced
by the equivalent `access_key_id` and `secret_access_key` parameters respectively.
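
   For example, when Vault itself runs with an ambient AWS identity (such as
   an EC2 instance profile or ECS task role), the same connection can be
   configured without static keys; this sketch simply omits the optional
   credential parameters from the example above:

   ```shell-session
   $ vault write database/config/my-redis-elasticache-cluster \
       plugin_name="redis-elasticache-database-plugin" \
       url="primary-endpoint.my-cluster.xxx.yyy.cache.amazonaws.com:6379" \
       region=us-east-1 \
       allowed_roles="*"
   ```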
The Redis ElastiCache secrets engine must use AWS credentials that have sufficient permissions to manage ElastiCache users.
This IAM policy sample can be used as an example. Note that `<region>` and `<account-id>`
must correspond to your own environment.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Action": [
"elasticache:ModifyUser",
"elasticache:DescribeUsers"
],
"Resource": "arn:aws:elasticache:<region>:<account-id>:user:*"
}
]
}
```
## Usage
After the secrets engine is configured, write static roles to enable generating credentials.
### Static roles
1. Configure a static role that maps a name in Vault to an existing Redis ElastiCache user.
```shell-session
$ vault write database/static-roles/my-static-role \
db_name="my-redis-elasticache-cluster" \
username="my-existing-redis-user" \
rotation_period=5m
Success! Data written to: database/static-roles/my-static-role
```
1. Retrieve the credentials from the `/static-creds` endpoint:
```shell-session
$ vault read database/static-creds/my-static-role
Key Value
--- -----
last_vault_rotation 2022-09-14T11:45:57.24715105-04:00
password GKdS6qY-UtVAMpcD9iuu
rotation_period 5m
ttl 4m48s
username my-existing-redis-user
```
~> **Note**: New passwords may take up to a couple of minutes before ElastiCache finishes applying them.
It is recommended to use a retry strategy when establishing new Redis ElastiCache connections. This may prevent errors when
trying to use a password that isn't yet live on the targeted ElastiCache cluster.
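
For example, a hypothetical shell retry loop (assumes `redis-cli` 6.0 or
later for the ACL `--user`/`--pass` flags, TLS in transit, and that
`$PASSWORD` holds the value read from Vault):

```shell-session
$ until redis-cli -h primary-endpoint.my-cluster.xxx.yyy.cache.amazonaws.com \
    -p 6379 --tls --user my-existing-redis-user --pass "$PASSWORD" \
    PING | grep -q PONG; do
    sleep 10  # wait for ElastiCache to finish applying the rotated password
done
```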
## API
The full list of configurable options can be seen in the [Redis ElastiCache Database Plugin API](/vault/api-docs/secret/databases/rediselasticache) page.
For more information on the database secrets engine's HTTP API please see the [Database Secrets Engine API](/vault/api-docs/secret/databases) page.
---
layout: docs
page_title: Redis - Database - Secrets Engines
description: |-
Redis is one of the supported plugins for the database secrets engine.
This plugin generates database credentials dynamically based on configured
roles for the Redis database, and also supports [Static Roles](/vault/docs/secrets/databases#static-roles).
---
# Redis database secrets engine
Redis is one of the supported plugins for the database secrets engine. This
plugin generates database credentials dynamically based on configured roles for
the Redis database.
See the [database secrets engine](/vault/docs/secrets/databases) docs for
more information about setting up the database secrets engine.
## Capabilities
| Plugin Name | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization |
| --------------------------- | ------------------------ | ------------- | ------------ | ---------------------- |
| `redis-database-plugin` | Yes | Yes | Yes | No |
## Setup
1. Enable the database secrets engine if it is not already enabled:
```shell-session
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure Vault with the proper plugin and connection configuration:
```shell-session
$ vault write database/config/my-redis-database \
plugin_name="redis-database-plugin" \
host="localhost" \
port=6379 \
tls=true \
ca_cert="$CACERT" \
username="user" \
password="pass" \
allowed_roles="my-*-role"
```
1. You should consider rotating the admin password. Note that if you do, the
new password will never be made available through Vault, so you should
create a Vault-specific database admin user for this.
```shell-session
vault write -force database/rotate-root/my-redis-database
```
## Usage
After the secrets engine is configured, write dynamic and static roles
to Vault to enable generating credentials.
### Dynamic roles
1. Configure a dynamic role that maps a name in Vault to a JSON string
containing the Redis ACL rules, which are either documented [here](https://redis.io/commands/acl-cat) or in the output
of the `ACL CAT` Redis command.
```shell-session
$ vault write database/roles/my-dynamic-role \
db_name="my-redis-database" \
creation_statements='["+@admin"]' \
default_ttl="5m" \
max_ttl="1h"
Success! Data written to: database/roles/my-dynamic-role
```
   Note that if no `creation_statements` are provided, the user account will
   default to a read-only user, `'["~*", "+@read"]'`, that can read any key. A
   more restrictive example follows this list.
1. Generate a new set of credentials by reading from the `/creds` endpoint with the name
of the role:
```shell-session
$ vault read database/creds/my-dynamic-role
Key Value
--- -----
lease_id database/creds/my-dynamic-role/OxCTXJcxQ2F4lReWPjbezSnA
lease_duration 5m
lease_renewable true
password dACqHsav6-attdv1glGZ
username V_TOKEN_MY-DYNAMIC-ROLE_YASUQUF3GVVD0ZWTEMK4_1608481717
```
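
As referenced above, creation statements can grant far less than `+@admin`.
A sketch of a role restricted to read and write operations on keys under a
hypothetical `app:` prefix, using standard Redis ACL rules:

```shell-session
$ vault write database/roles/my-app-role \
    db_name="my-redis-database" \
    creation_statements='["~app:*", "+@read", "+@write"]' \
    default_ttl="5m" \
    max_ttl="1h"
```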
### Static roles
1. Configure a static role that maps a name in Vault to an existing Redis
user.
```shell-session
$ vault write database/static-roles/my-static-role \
db_name="my-redis-database" \
username="my-existing-redis-user" \
rotation_period=5m
Success! Data written to: database/static-roles/my-static-role
```
1. Retrieve the credentials from the `/static-creds` endpoint:
```shell-session
$ vault read database/static-creds/my-static-role
Key Value
--- -----
last_vault_rotation 2020-12-20T10:39:49.647822-06:00
password ylKNgqa3NPVAioBf-0S5
rotation_period 5m
ttl 4m39s
username my-existing-redis-user
```
## API
The full list of configurable options can be seen in the [Redis Database Plugin API](/vault/api-docs/secret/databases/redis) page.
For more information on the database secrets engine's HTTP API please see the [Database Secrets Engine API](/vault/api-docs/secret/databases) page.
---
layout: docs
page_title: PostgreSQL - Database - Secrets Engines
description: |-
PostgreSQL is one of the supported plugins for the database secrets engine.
This plugin generates database credentials dynamically based on configured
roles for the PostgreSQL database.
---
# PostgreSQL database secrets engine
PostgreSQL is one of the supported plugins for the database secrets engine. This
plugin generates database credentials dynamically based on configured roles for
the PostgreSQL database, and also supports [Static
Roles](/vault/docs/secrets/databases#static-roles).
See the [database secrets engine](/vault/docs/secrets/databases) docs for
more information about setting up the database secrets engine.
The PostgreSQL secrets engine uses [pgx][pgxlib], the same database library as the
[PostgreSQL storage backend](/vault/docs/configuration/storage/postgresql). Connection string
options, including SSL options, can be found in the [pgx][pgxlib] and
[PostgreSQL connection string][pg_conn_docs] documentation.
## Capabilities
| Plugin Name | Root Credential Rotation | Dynamic Roles | Static Roles | Username Customization | Credential Types |
| ---------------------------- | ------------------------ | ------------- | ------------ | ---------------------- | ---------------------------- |
| `postgresql-database-plugin` | Yes | Yes | Yes | Yes (1.7+) | password, gcp_iam |
## Setup
1. Enable the database secrets engine if it is not already enabled:
```shell-session
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure Vault with the proper plugin and connection information:
```shell-session
$ vault write database/config/my-postgresql-database \
plugin_name="postgresql-database-plugin" \
allowed_roles="my-role" \
connection_url="postgresql://:@localhost:5432/database-name" \
username="vaultuser" \
password="vaultpass" \
password_authentication="scram-sha-256"
```
1. Configure a role that maps a name in Vault to an SQL statement to execute to
create the database credential:
```shell-session
$ vault write database/roles/my-role \
db_name="my-postgresql-database" \
creation_statements="CREATE ROLE \"\" WITH LOGIN PASSWORD '' VALID UNTIL ''; \
GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"\";" \
default_ttl="1h" \
max_ttl="24h"
Success! Data written to: database/roles/my-role
```
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can generate credentials.
1. Generate a new credential by reading from the `/creds` endpoint with the name
of the role:
```shell-session
$ vault read database/creds/my-role
Key Value
--- -----
lease_id database/creds/my-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
lease_duration 1h
lease_renewable true
password SsnoaA-8Tv4t34f41baD
username v-vaultuse-my-role-x
```
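
Generated credentials are tied to the lease shown above. A client can renew
the lease up to `max_ttl`, or revoke it early, with the standard lease
commands, for example:

```shell-session
$ vault lease renew database/creds/my-role/2f6a614c-4aa2-7b19-24b9-ad944a8d4de6
```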
## Rootless Configuration and Password Rotation for Static Roles
<EnterpriseAlert product="vault" />
The PostgreSQL secrets engine supports using Static Roles and its password rotation mechanisms with a Rootless
DB connection configuration. In this workflow, a static DB user can be onboarded onto Vault's static role rotation
mechanism without the need of privileged root accounts to configure the connection. Instead of using a single root
connection, multiple dedicated connections to the DB are made for each static role. This workflow does not support
dynamic roles/credentials.
~> Note: It is **highly recommended** that the DB users being onboarded as static roles
have the minimum set of privileges. Each static role will open a new connection into the DB.
Granting minimum privileges to the DB users being onboarded ensures that multiple
highly-privileged connections to an external system are not being made.
~> Note: Out-of-band password rotations will cause Vault to be out of sync with the state of
the DB user, and will require manually updating the user's password in the external PostgreSQL
DB in order to resolve any errors encountered during rotation.
1. Enable the database secrets engine if it is not already enabled:
```shell-session
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure connection to DB without root credentials and enable the rootless
workflow by setting the `self_managed` parameter:
```shell-session
$ vault write database/config/my-postgresql-database \
plugin_name="postgresql-database-plugin" \
allowed_roles="my-role" \
connection_url="postgresql://:@localhost:5432/database-name" \
self_managed=true
```
1. Configure a static role that creates a dedicated connection to a user in the DB with
the `self_managed_password` parameter:
```shell-session
$ vault write database/static-roles/my-static-role \
db_name="my-postgresql-database" \
username="staticuser" \
self_managed_password="password" \
rotation_period="1h"
```
1. Read static credentials:
```shell-session
$ vault read database/static-creds/my-static-role
Key Value
--- -----
last_vault_rotation 2024-09-11T14:15:13.764783-07:00
password XZY42BVc-UO5bMsbgxrW
rotation_period 1h
ttl 59m55s
username staticuser
```
## Client x509 certificate authentication
This plugin supports using PostgreSQL's [x509 Client-side Certificate Authentication](https://www.postgresql.org/docs/16/libpq-ssl.html#LIBPQ-SSL-CLIENTCERT).
To use this authentication mechanism, configure the plugin to consume the
PEM-encoded TLS data inline from a file on disk by prefixing with the "@"
symbol. This is useful in environments where you do not have direct access to
the machine that is hosting the Vault server. For example:
```shell-session
$ vault write database/config/my-postgresql-database \
plugin_name="postgresql-database-plugin" \
allowed_roles="my-role" \
connection_url="postgresql://:@localhost:5432/database-name?sslmode=verify-full" \
username="vaultuser" \
private_key=@/path/to/client.key \
tls_certificate=@/path/to/client.pem \
tls_ca=@/path/to/client.ca
```
Note: `private_key`, `tls_certificate`, and `tls_ca` map to [`sslkey`][sslkey_docs],
[`sslcert`][sslcert_docs], and [`sslrootcert`][sslrootcert_docs] configuration
options from PostgreSQL with the exception that the Vault parameters are the
contents of those files, not filenames.
[sslkey_docs]: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-SSLKEY
[sslcert_docs]: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-SSLCERT
[sslrootcert_docs]: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-SSLROOTCERT
Alternatively, you can configure certificate authentication in environments
where the TLS certificate data is present on the machine that is running the
Vault server process. Set `sslmode` to be any of the applicable values as
outlined in the PostgreSQL documentation and set the SSL credentials in the
`sslrootcert`, `sslcert` and `sslkey` connection parameters as paths to files.
For example:
```shell-session
$ export SSL="sslmode=verify-full&sslrootcert=/path/to/ca.pem&sslcert=/path/to/client.pem&sslkey=/path/to/client.key"
$ vault write database/config/my-postgresql-database \
plugin_name="postgresql-database-plugin" \
allowed_roles="my-role" \
connection_url="postgresql://:@localhost:5432/database-name?sslmode=verify-full&${SSL}" \
username="vaultuser"
```
## API
The full list of configurable options can be seen in the [PostgreSQL database
plugin API](/vault/api-docs/secret/databases/postgresql) page.
For more information on the database secrets engine's HTTP API please see the
[Database secrets engine API](/vault/api-docs/secret/databases) page.
[pgxlib]: https://pkg.go.dev/github.com/jackc/pgx/stdlib
[pg_conn_docs]: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
## Authenticating to Cloud DBs via IAM
### Google Cloud
Aside from IAM roles denoted by [Google's CloudSQL documentation](https://cloud.google.com/sql/docs/postgres/add-manage-iam-users#creating-a-database-user),
the following SQL privileges are needed by the service account's DB user for minimum functionality with Vault.
Additional privileges may be needed depending on the SQL configured on the database roles.
```sql
-- Enable service account to create roles within DB
ALTER USER "<YOUR DB USERNAME>" WITH CREATEROLE;
```
### Setup
1. Enable the database secrets engine if it is not already enabled:
```shell-session
$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
```
By default, the secrets engine will enable at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
1. Configure Vault with the proper plugin and connection information. Here you can explicitly enable GCP IAM authentication
and use [Application Default Credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc#how-to) to authenticate:
```shell-session
$ vault write database/config/my-postgresql-database \
plugin_name="postgresql-database-plugin" \
allowed_roles="my-role" \
connection_url="host=project:us-west1:mydb [email protected] dbname=postgres sslmode=disable" \
use_private_ip="false" \
auth_type="gcp_iam"
```
You can also configure the connection and authenticate by directly passing in the service account credentials
as an encoded JSON string:
```shell-session
$ vault write database/config/my-postgresql-database \
plugin_name="postgresql-database-plugin" \
allowed_roles="my-role" \
connection_url="host=project:region:instance [email protected] dbname=postgres sslmode=disable" \
use_private_ip="false" \
auth_type="gcp_iam" \
service_account_json="@my_credentials.json"
```
Once the connection has been configured and IAM authentication is complete, the steps to set up a role and generate
credentials are the same as the ones listed above.
---
layout: docs
page_title: KV - Secrets Engines
description: The KV secrets engine can store arbitrary secrets.
---
# KV secrets engine - version 1
The `kv` secrets engine is used to store arbitrary secrets within the
configured physical storage for Vault.
Writing to a key in the `kv` backend will replace the old value; sub-fields are
not merged together.
Key names must always be strings. If you write non-string values directly via
the CLI, they will be converted into strings. However, you can preserve
non-string values by writing the key/value pairs to Vault from a JSON file or
using the HTTP API.
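
For example, assuming a hypothetical file named `data.json`, integer and
boolean values survive the round trip:

```shell-session
$ cat data.json
{"port": 8200, "enabled": true}

$ vault kv put kv/my-config @data.json
```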
This secrets engine honors the distinction between the `create` and `update`
capabilities inside ACL policies.
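
For example, a policy sketch that lets clients write new secrets under a
hypothetical `kv/apps/` prefix without being able to overwrite existing ones:

```hcl
# "create" permits writes to paths that do not exist yet; without
# "update", writes to existing paths are denied.
path "kv/apps/*" {
  capabilities = ["create"]
}
```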
~> **Note**: Path and key names are _not_ obfuscated or encrypted; only the
values set on keys are. You should not store sensitive information as part of a
secret's path.
## Setup
To enable a version 1 kv store:
```shell-session
$ vault secrets enable -version=1 kv
```
## Usage
After the secrets engine is configured and a user/machine has a Vault token with
the proper permission, it can read and write data. The `kv` secrets engine
allows for writing keys with arbitrary values.
1. Write arbitrary data:
```shell-session
$ vault kv put kv/my-secret my-value=s3cr3t
Success! Data written to: kv/my-secret
```
1. Read arbitrary data:
```shell-session
$ vault kv get kv/my-secret
Key Value
--- -----
my-value s3cr3t
```
1. List the keys:
```shell-session
$ vault kv list kv/
Keys
----
my-secret
```
1. Delete a key:
```shell-session
$ vault kv delete kv/my-secret
Success! Data deleted (if it existed) at: kv/my-secret
```
You can also use [Vault's password policy](/vault/docs/concepts/password-policies) feature to generate arbitrary values.
1. Write a password policy:
```shell-session
$ vault write sys/policies/password/example policy=-<<EOF
length=20
rule "charset" {
charset = "abcdefghij0123456789"
min-chars = 1
}
rule "charset" {
charset = "!@#$%^&*STUVWXYZ"
min-chars = 1
}
EOF
```
1. Write data using the `example` policy:
```shell-session
$ vault kv put kv/my-generated-secret \
password=$(vault read -field password sys/policies/password/example/generate)
```
1. Read the generated data:
```shell-session
$ vault kv get kv/my-generated-secret
====== Data ======
Key Value
--- -----
password ^dajd609Xf8Zhac$dW24
```
## TTLs
Unlike other secrets engines, the KV secrets engine does not enforce TTLs
for expiration. Instead, the `lease_duration` is a hint for how often consumers
should check back for a new value.
If provided a key of `ttl`, the KV secrets engine will utilize this value
as the lease duration:
```shell-session
$ vault kv put kv/my-secret ttl=30m my-value=s3cr3t
Success! Data written to: kv/my-secret
```
Even with a `ttl` set, the secrets engine _never_ removes data on its own. The
`ttl` key is merely advisory.
When reading a value with a `ttl`, both the `ttl` key _and_ the refresh interval
will reflect the value:
```shell-session
$ vault kv get kv/my-secret
Key Value
--- -----
my-value s3cr3t
ttl 30m
```
## API
The KV secrets engine has a full HTTP API. Please see the
[KV secrets engine API](/vault/api-docs/secret/kv/kv-v1) for more
details.
---
layout: docs
page_title: KV - Secrets Engines
description: The KV secrets engine can store arbitrary secrets.
---
# KV secrets engine
The `kv` secrets engine is a generic key-value store used to store arbitrary
secrets within the configured physical storage for Vault. This secrets engine
can run in one of two modes: it can store a single value for a key, or it can
store a number of versions for each key and maintain a record of them.
## KV version 1
When running the `kv` secrets engine non-versioned, it stores the most recently
written value for a key. Any update overwrites the original value, which is not
recoverable. The benefit of non-versioned `kv` is a reduced storage size for
each key, since no additional metadata or history is stored. Additionally, it
gives better runtime performance because requests require fewer storage calls
and no locking.
Refer to the [KV version 1 Docs](/vault/docs/secrets/kv/kv-v1) for more
information.
## KV version 2
When running v2 of the `kv` secrets engine, a key can retain a configurable
number of versions. The default is 10 versions. The older versions' metadata and
data can be retrieved. Additionally, it provides check-and-set operations to
prevent overwriting data unintentionally.
When a version is deleted, the underlying data is not removed; rather, it is
marked as deleted. Deleted versions can be undeleted. To permanently remove a
version's data, use the `vault kv destroy` command or the API endpoint. You can
delete all versions and metadata for a key by deleting the metadata using the
`vault kv metadata delete` command or the API endpoint with the DELETE verb. You
can restrict who has permissions to soft delete, undelete, or fully remove data
with [Vault policies](/vault/docs/concepts/policies).
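As a rough sketch of that lifecycle with the CLI (assuming a KV v2 plugin
mounted at `secret/` and an illustrative key named `web-app`):

```shell-session
# Soft delete the latest version; the data remains recoverable
$ vault kv delete -mount secret web-app

# Restore version 1 from a soft delete
$ vault kv undelete -versions 1 -mount secret web-app

# Permanently remove the data for version 1
$ vault kv destroy -versions 1 -mount secret web-app
```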
Refer to the [KV version 2 Docs](/vault/docs/secrets/kv/kv-v2) for more
information.
## Version comparison
Regardless of its version, you use the [`vault kv`](/vault/docs/commands/kv)
command to interact with the KV secrets engine. However, the API endpoints are
different. You must be aware of those differences to write policies as intended.
The following table lists the `vault kv` sub-commands and their respective API
endpoints assuming the KV secrets engine is enabled at `secret/`.
| Command | KV v1 endpoint | KV v2 endpoint |
| ----------------- | ----------------- | ------------------------------ |
| `vault kv get` | secret/<key_path> | secret/**data**/<key_path> |
| `vault kv put` | secret/<key_path> | secret/**data**/<key_path> |
| `vault kv list` | secret/<key_path> | secret/**metadata**/<key_path> |
| `vault kv delete` | secret/<key_path> | secret/**data**/<key_path> |
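For example, reading the same logical key with the low-level `vault read`
command makes the path difference visible (the key name is illustrative):

```shell-session
# KV v1 mounted at secret/
$ vault read secret/web-app

# KV v2 mounted at secret/ -- note the data/ segment
$ vault read secret/data/web-app
```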
In addition, KV v2 has sub-commands to handle versioning of secrets.
| Command | KV v2 endpoint |
| ------------------- | ------------------------------ |
| `vault kv patch` | secret/**data**/<key_path> |
| `vault kv rollback` | secret/**data**/<key_path> |
| `vault kv undelete` | secret/**undelete**/<key_path> |
| `vault kv destroy` | secret/**destroy**/<key_path> |
| `vault kv metadata` | secret/**metadata**/<key_path> |
To reduce confusion, the CLI command outputs the secret path when you are
working with KV v2.
**Example:**
<CodeBlockConfig hideClipboard highlight="4">
```shell-session
$ vault kv put secret/web-app api-token="WEOIRJ13895130WENJWEFN"
=== Secret Path ===
secret/data/web-app
======= Metadata =======
Key Value
--- -----
created_time 2024-07-02T00:34:58.074825Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
```
</CodeBlockConfig>
You can use the `-mount` flag if omitting `/data/` in the CLI command is confusing.
```shell-session
$ vault kv put -mount=secret web-app api-token="WEOIRJ13895130WENJWEFN"
```
---
layout: docs
page_title: Save random strings
description: >-
Use password policies and the key/value v2 plugins to generate and store
random strings in Vault.
---
# Save random strings to the key/value v2 plugin
Use [password policies](/vault/docs/concepts/password-policies) to generate
random strings and save the strings to your key/value v2 plugin.
## Before you start
- **You must have `read`, `create`, and `update` permission for password policies**.
- **You must have `create` and `update` permission for your `kv` v2 plugin**.
## Step 1: Create a password policy file
Create an HCL file with a password policy with the desired randomization and
generation rules.
For example, the following password policy requires a string 20 characters long
that includes:
- at least one lowercase character
- at least one uppercase character
- at least one number
- at least two special characters
```hcl
length=20
rule "charset" {
charset = "abcdefghijklmnopqrstuvwxyz"
min-chars = 1
}
rule "charset" {
charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
min-chars = 1
}
rule "charset" {
charset = "0123456789"
min-chars = 1
}
rule "charset" {
charset = "!@#$%^&*STUVWXYZ"
min-chars = 2
}
```
## Step 2: Save the password policy
<Tabs>
<Tab heading="CLI" group="cli">
Use `vault write` to save policies to the password policies endpoint
(`sys/policies/password/<policy_name>`):
```shell-session
$ vault write sys/policies/password/<policy_name> policy=@<policy_file>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault write sys/policies/password/randomize policy=@password-rules.hcl
Success! Data written to: sys/policies/password/randomize
```
</CodeBlockConfig>
</Tab>
<Tab heading="API" group="api">
Escape your password policy file and make a `POST` call to
[`/sys/policies/password/{policy_name}`](/vault/api-docs/system/policies-password#create-update-password-policy)
with your password creation rules:
```shell-session
$ jq -Rs '{ "policy": . | gsub("[\\r\\n\\t]"; "") }' <path_to_policy_file> |
curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
"$(</dev/stdin)" \
${VAULT_ADDR}/v1/sys/policies/password/<policy_name>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ jq -Rs '{ "policy": . | gsub("[\\r\\n\\t]"; "") }' ./password-rules.hcl |
curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data "$(</dev/stdin)" \
${VAULT_ADDR}/v1/sys/policies/password/randomize | jq
```
</CodeBlockConfig>
`/sys/policies/password/{policy_name}` does not return data on success.
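If you want to confirm the policy saved correctly, a `GET` against the same
endpoint returns the stored rules (the `randomize` policy name comes from the
example above):

```shell-session
$ curl \
    --request GET \
    --header "X-Vault-Token: ${VAULT_TOKEN}" \
    ${VAULT_ADDR}/v1/sys/policies/password/randomize | jq
```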
</Tab>
</Tabs>
## Step 3: Save a random string to `kv` v2
<Tabs>
<Tab heading="CLI" group="cli">
Use `vault read` and the `generate` endpoint of the new password policy to
generate a new random string and write it to the `kv` plugin with
`vault kv put`:
```shell-session
$ vault kv put \
-mount <mount_path> \
<secret_path> \
<key_name>=$( \
vault read -field password \
sys/policies/password/<policy_name>/generate \
)
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault kv put \
-mount shared \
/dev/seeds \
seed1=$( \
vault read -field password \
sys/policies/password/randomize/generate \
)
==== Secret Path ====
shared/data/dev/seeds
======= Metadata =======
Key Value
--- -----
created_time 2024-11-15T23:15:31.929717548Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
```
</CodeBlockConfig>
</Tab>
<Tab heading="API" group="api">
Use the
[`/sys/policies/password/{policy_name}/generate`](/vault/api-docs/system/policies-password#generate-password-from-password-policy)
endpoint of the new password policy to generate a random string and write it to
the `kv` plugin with a `POST` call to
[`/{plugin_mount_path}/data/{secret_path}`](/vault/api-docs/secret/kv/kv-v2#create-update-secret):
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data \
"{
\"data\": {
\"<key_name>\": \"$(
vault read -field password sys/policies/password/<policy_name>/generate
)\"
}
}" \
${VAULT_ADDR}/v1/<plugin_mount_path>/data/<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data \
"{
\"data\": {
\"seed1\": \"$(
vault read -field password sys/policies/password/randomize/generate
)\"
}
}" \
${VAULT_ADDR}/v1/shared/data/dev/seeds | jq
{
"request_id": "f9fad221-74e7-72c4-3f5a-9364944c37d9",
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": {
"created_time": "2024-11-15T23:33:08.549750507Z",
"custom_metadata": null,
"deletion_time": "",
"destroyed": false,
"version": 1
},
"wrap_info": null,
"warnings": null,
"auth": null,
"mount_type": "kv"
}
```
</CodeBlockConfig>
</Tab>
</Tabs>
## Step 4: Verify the data in Vault
<Tabs>
<Tab heading="CLI" group="cli">
Use [`vault kv get`](/vault/docs/commands/kv/get) with the `-field` flag to read
the randomized string from the relevant secret path:
```shell-session
$ vault kv get \
-mount <mount_path> \
-field <field_name> \
<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault kv get -mount shared -field seed1 dev/seeds
g0bc0b6W3ii^SXa@*ie5
```
</CodeBlockConfig>
</Tab>
<Tab heading="GUI" group="gui">
@include 'gui-instructions/plugins/kv/open-overview.mdx'
- Select the **Secret** tab.
- Click the eye icon to view the desired key value.

</Tab>
<Tab heading="API" group="api">
Call the [`/{plugin_mount_path}/data/{secret_path}`](/vault/api-docs/secret/kv/kv-v2#read-secret-version)
endpoint to read all the key/value pairs at the secret path:
```shell-session
$ curl \
--request GET \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
${VAULT_ADDR}/v1/<plugin_mount_path>/data/<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ curl \
--request GET \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
${VAULT_ADDR}/v1/shared/data/dev/seeds | jq
{
"request_id": "c1202e8d-aff9-2d81-0929-4a558a193b4c",
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": {
"data": {
"seed1": "g0bc0b6W3ii^SXa@*ie5"
},
"metadata": {
"created_time": "2024-11-15T23:33:08.549750507Z",
"custom_metadata": null,
"deletion_time": "",
"destroyed": false,
"version": 1
}
},
"wrap_info": null,
"warnings": null,
"auth": null,
"mount_type": "kv"
}
```
</CodeBlockConfig>
</Tab>
</Tabs>
---
layout: docs
page_title: Set up the key/value v2 plugin
description: >-
Enable and configure the key/value v2 plugins to store arbitrary static
secrets in Vault.
---
# Set up the key/value v2 plugin
Use `vault secrets enable` to enable an instance of the `kv` plugin. To specify
version 2, use the `-version` flag or specify `kv-v2` as the plugin type.

Additionally, when running a dev-mode server, the v2 `kv` secrets engine is
enabled by default at the path `secret/` (for non-dev servers, it is currently
v1). It can be disabled, moved, or enabled multiple times at different paths.
Each instance of the KV secrets engine is isolated and unique.
## Before you start
- **You must have permission to update ACL policies**.
- **You must have permission to enable plugins**.
## Step 1: Enable the plugin
<Tabs>
<Tab heading="CLI" group="cli">
Use `vault secrets enable` to establish a new instance of the `kv` plugin.
To specify key/value version 2, use the `-version` flag or use `kv-v2` as the
plugin type.
**Option 1**: Use the `-version` flag:
```shell-session
$ vault secrets enable -path <mount_path> -version=2 kv
```
**Option 2**: Use the `kv-v2` plugin type:
```shell-session
$ vault secrets enable -path <mount_path> kv-v2
```
</Tab>
<Tab heading="GUI" group="gui">
@include 'gui-instructions/enable-secrets-plugin.mdx'
- Select the "KV" plugin.
- Enter a unique path for the plugin and provide the relevant configuration
data.
</Tab>
<Tab heading="API" group="api">
1. Create a JSON file with the type and configuration information for your `kv`
v2 instance. Use the `options` field to set optional flags.
1. Make a `POST` call to
[`/sys/mounts/{plugin_mount_path}`](/vault/api-docs/system/mounts#enable-secrets-engine)
with the JSON data:
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data @data.json \
${VAULT_ADDR}/v1/sys/mounts/<plugin_mount_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```json
{
"type": "kv",
"options": {
"version": "2"
}
}
```
</CodeBlockConfig>
<CodeBlockConfig hideClipboard="true">
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data @data.json \
${VAULT_ADDR}/v1/sys/mounts/shared | jq
```
</CodeBlockConfig>
`/sys/mounts/{plugin_mount_path}` does not return data on success.
</Tab>
</Tabs>
## Step 2: Create an ACL policy file
<Note>
ACL policies for `kv` plugins do not support the `allowed_parameters`,
`denied_parameters`, and `required_parameters` policy fields.
</Note>
Create a policy definition file based on your needs.
For example, assume there are API keys stored on the path `/dev/square-api` for
a `kv` plugin mounted at `shared/`. The following policy lets clients read and
patch the latest version of API keys and read metadata for any version of the
API keys:
```hcl
# Grants permission to read and patch the latest version of API keys
path "shared/data/dev/square-api/*" {
capabilities = ["read", "patch"]
}
# Grants permission to read metadata for any version of the API keys
path "shared/metadata/dev/square-api/" {
capabilities = ["read"]
}
```
<Tabs>
<Tab heading="Available path prefixes">
@include 'policies/path-prefixes.mdx'
</Tab>
<Tab heading="Available permissions">
@include 'policies/policy-permissions.mdx'
</Tab>
</Tabs>
If you are unsure about the required permissions, use the Vault CLI to run a
command with placeholder data and the `-output-policy` flag against an existing
`kv` plugin to generate a minimal policy.
<CodeBlockConfig highlight="2">
```shell-session
$ vault kv patch \
-output-policy \
-mount <existing_mount_path> \
test-path \
test=test
```
</CodeBlockConfig>
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault kv patch \
-output-policy \
-mount private \
test-path \
test=test
path "private/data/test-path" {
capabilities = ["patch"]
}
```
</CodeBlockConfig>
## Step 3: Save the ACL policy
<Tabs>
<Tab heading="CLI" group="cli">
Use `vault policy write` to create a new ACL policy with the policy definition
file:
```shell-session
$ vault policy write <name> <path_to_policy_file>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault policy write "KV-access-policy" ./kv-policy.hcl
```
</CodeBlockConfig>
</Tab>
<Tab heading="GUI" group="gui">
@include 'gui-instructions/create-acl-policy.mdx'
- Provide a name for the policy and upload the policy definition file.
<Tip>
If you expect to modify policies with the Vault API, avoid spaces and special
characters in the policy name. The policy name becomes part of the API endpoint
path.
</Tip>
</Tab>
<Tab heading="API" group="api">
Escape your policy file and make a `POST` call to
[`/sys/policy/{policy_name}`](/vault/api-docs/system/policy#create-update-policy)
with your policy details:
```shell-session
$ jq -Rs '{ "policy": . | gsub("[\\r\\n\\t]"; "") }' <path_to_policy_file> |
curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
"$(</dev/stdin)" \
${VAULT_ADDR}/v1/sys/policy/<policy_name>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ jq -Rs '{ "policy": . | gsub("[\\r\\n\\t]"; "") }' ./kv-policy.hcl |
curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data "$(</dev/stdin)" \
${VAULT_ADDR}/v1/sys/policy/kv-access | jq
```
</CodeBlockConfig>
`/sys/policy/{policy_name}` does not return data on success.
</Tab>
</Tabs>
## Next steps
- [Create an authentication mapping for the plugin](/vault/docs/concepts/policies#associating-policies)
---
layout: docs
page_title: Upgrade to key/value v2
description: >-
Upgrade existing v1 key/value plugins to leverage kv v2 features.
---
# Upgrade `kv` version 1 plugins
You can upgrade existing version 1 key/value stores to version 2 to use
versioning.
<Warning>
You cannot access v1 plugin mounts during the upgrade, which may take a long
time for plugins that contain significant data.
</Warning>
## Before you start
- **You must have permission to update ACL policies**.
- **You must have permission to tune the `kv` v1 plugin**.
## Step 1: Update ACL rules
The `kv` v2 plugin uses different API path prefixes than `kv` v1. You must
upgrade the relevant ACL policies **before** upgrading the plugin by changing
v1 paths for read, write, or update policies to include the v2 path prefix,
`data/`.
For example, the following `kv` v1 policy:
```hcl
path "shared/dev/square-api/*" {
capabilities = ["create", "update", "read"]
}
```
becomes:
```hcl
path "secret/dev/square-api/*" {
capabilities = ["create", "update", "read"]
}
```
<Tip>
You can assign different ACL policies to different `kv` v2 paths.
</Tip>
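For example, a single policy sketch (paths illustrative) can grant different
capabilities to the `data/`, `delete/`, and `metadata/` prefixes of the same
secret:

```hcl
path "shared/data/dev/square-api/*" {
  capabilities = ["create", "update", "read"]
}

path "shared/delete/dev/square-api/*" {
  capabilities = ["update"]
}

path "shared/metadata/dev/square-api/*" {
  capabilities = ["read", "list"]
}
```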
## Step 2: Upgrade the plugin instance
<Tabs>
<Tab heading="CLI" group="cli">
Use the `enable-versioning` subcommand to upgrade from v1 to v2:
```shell-session
$ vault kv enable-versioning <kv_v1_mount_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault kv enable-versioning shared/
Success! Tuned the secrets engine at: shared/
```
</CodeBlockConfig>
</Tab>
<Tab heading="API" group="api">
Make a `POST` call to
[`/sys/mounts/{plugin_mount_path}/tune`](/vault/api-docs/system/mounts#tune-mount-configuration)
with `options.version` set to `2` to update the plugin version:
```shell-session
$ curl \
    --header "X-Vault-Token: ${VAULT_TOKEN}" \
    --request POST \
    --data '{"options": {"version": "2"}}' \
    ${VAULT_ADDR}/v1/sys/mounts/${KV_MOUNT_PATH}/tune
```
</Tab>
</Tabs>
## Related resources
- [KV v2 plugin API docs](/vault/api-docs/secret/kv/kv-v2)
- [Tutorial: Versioned Key Value Secrets Engine](/vault/tutorials/secrets-management/versioned-kv) -
Learn how to compare data in the KV v2 secrets engine and protect data from
accidental deletion.
---
layout: docs
page_title: Restore soft deleted data
description: >-
Revert soft deletes to restore versioned key/value data in the kv v2 plugin.
---
# Restore soft deleted key/value data
You can restore data from soft deletes in the `kv` v2 plugin as long as the
`destroyed` metadata field for the targeted version is `false`.
<Tip title="Assumptions">
- You have [set up a `kv` v2 plugin](/vault/docs/secrets/kv/kv-v2/setup).
- Your authentication token has `create` and `update` permissions for the `kv`
v2 plugin.
</Tip>
<Tabs>
<Tab heading="CLI" group="cli">
Use [`vault kv undelete`](/vault/docs/commands/kv/undelete) with the `-versions`
flag to restore soft deleted versions of key/value data:
```shell-session
$ vault kv undelete \
-mount <mount_path> \
-versions <target_versions> \
<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault kv undelete -mount shared -versions 1,4 dev/square-api
Success! Data written to: shared/undelete/dev/square-api
```
</CodeBlockConfig>
The `deletion_time` metadata field for versions 1 and 4 is now `n/a`:
<CodeBlockConfig hideClipboard="true" highlight="22,31">
```shell-session
$ vault kv metadata get -mount shared dev/square-api
======== Metadata Path ========
shared/metadata/dev/square-api
========== Metadata ==========
Key Value
--- -----
cas_required false
created_time 2024-11-13T21:51:50.898782695Z
current_version 4
custom_metadata <nil>
delete_version_after 0s
max_versions 5
oldest_version 0
updated_time 2024-11-14T22:32:42.29534643Z
====== Version 1 ======
Key Value
--- -----
created_time 2024-11-13T21:51:50.898782695Z
deletion_time n/a
destroyed false
...
====== Version 4 ======
Key Value
--- -----
created_time 2024-11-14T22:32:42.29534643Z
deletion_time n/a
destroyed false
```
</CodeBlockConfig>
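To confirm the restore, you can read one of the undeleted versions directly
(the version number is illustrative):

```shell-session
$ vault kv get -mount shared -version 1 dev/square-api
```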
</Tab>
<Tab heading="GUI" group="gui">
@include 'gui-instructions/plugins/kv/open-overview.mdx'
- Select the **Secret** tab.
- Select the appropriate data version from the **Version** dropdown.
- Click **Undelete**.

</Tab>
<Tab heading="API" group="api">
Make a `POST` call to
[`/{plugin_mount_path}/undelete/{secret_path}`](/vault/api-docs/secret/kv/kv-v2#undelete-secret-versions)
with the data versions you want to restore:
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data '{"versions":[<target_versions>]} \
${VAULT_ADDR}/v1/<plugin_mount_path>/undelete/<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data '{"versions":[5,8]}' \
${VAULT_ADDR}/v1/shared/undelete/dev/square-api | jq
```
</CodeBlockConfig>

`/{plugin_mount_path}/undelete/{secret_path}` does not return data on success.
</Tab>
</Tabs>
---
layout: docs
page_title: Read data
description: >-
Read versioned key/value data from the kv v2 plugin
---
# Read versioned key/value data
Read versioned data from an existing data path in the `kv` v2 plugin.
<Tip title="Assumptions">
- You have [set up a `kv` v2 plugin](/vault/docs/secrets/kv/kv-v2/setup).
- Your authentication token has `read` permissions for the `kv` v2 plugin.
</Tip>
<Tabs>
<Tab heading="CLI" group="cli">
Use [`vault kv get`](/vault/docs/commands/kv/get) to read **all** the current
key/value pairs on the given path:
```shell-session
$ vault kv get \
-mount <mount_path> \
<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault kv get -mount shared dev/square-api
======= Secret Path =======
shared/data/dev/square-api
======= Metadata =======
Key Value
--- -----
created_time 2024-11-13T21:58:32.128442898Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 3
===== Data =====
Key Value
--- -----
prod 5678
sandbox 1234
```
</CodeBlockConfig>
Use the `-field` flag to target a specific key/value pair on the given path:
```shell-session
$ vault kv get \
-mount <mount_path> \
-field <field_name> \
<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault kv get -mount shared -field prod dev/square-api
5678
```
</CodeBlockConfig>
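You can also read an older version with the `-version` flag (the version
number is illustrative):

```shell-session
$ vault kv get -mount shared -version 2 dev/square-api
```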
</Tab>
<Tab heading="GUI" group="gui">
@include 'gui-instructions/plugins/kv/open-overview.mdx'
- Select the **Secret** tab.
- Click the eye icon to view the desired key value.

</Tab>
<Tab heading="API" group="api">
Call the [`/{plugin_mount_path}/data/{secret_path}`](/vault/api-docs/secret/kv/kv-v2#read-secret-version)
endpoint to read all the key/value pairs at the secret path:
```shell-session
$ curl \
--request GET \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
${VAULT_ADDR}/v1/<plugin_mount_path>/data/<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ curl \
--request GET \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
${VAULT_ADDR}/v1/shared/data/dev/square-api | jq
{
"request_id": "992da4a2-f2d1-5786-ea53-1e8ea6440a7c",
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": {
"data": {
"prod": "5679",
"sandbox": "1234",
"smoke": "abcd"
},
"metadata": {
"created_time": "2024-11-15T02:41:02.556301319Z",
"custom_metadata": null,
"deletion_time": "",
"destroyed": false,
"version": 7
}
},
"wrap_info": null,
"warnings": null,
"auth": null,
"mount_type": "kv"
}
```
</CodeBlockConfig>
</Tab>
</Tabs>
---
layout: docs
page_title: Soft delete data
description: >-
Use soft deletes to control the lifecycle of versioned key/value data in the
kv v2 plugin.
---
# Soft delete key/value data
Use soft deletes to flag data at a secret path as unavailable while leaving the
data recoverable. You can revert soft deletes as long as the `destroyed` field
is `false` in the metadata.
<Tip title="Assumptions">
- You have [set up a `kv` v2 plugin](/vault/docs/secrets/kv/kv-v2/setup).
- Your authentication token has `create` and `update` permissions for the `kv`
v2 plugin.
</Tip>
<Tabs>
<Tab heading="CLI" group="cli">
Use [`vault kv delete`](/vault/docs/commands/kv/delete) with the `-versions` flag to
soft delete one or more versions of key/value data and set the `deletion_time`
field in the metadata:
```shell-session
$ vault kv delete \
-mount <mount_path> \
-versions <target_versions> \
<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault kv delete -mount shared -versions 1,4 dev/square-api
Success! Data deleted (if it existed) at: shared/data/dev/square-api
```
</CodeBlockConfig>
The `deletion_time` metadata field for versions 1 and 4 now has the timestamp
of when Vault marked the versions as deleted:
<CodeBlockConfig hideClipboard="true" highlight="22,31">
```shell-session
$ vault kv metadata get -mount shared dev/square-api
======== Metadata Path ========
shared/metadata/dev/square-api
========== Metadata ==========
Key Value
--- -----
cas_required false
created_time 2024-11-13T21:51:50.898782695Z
current_version 4
custom_metadata <nil>
delete_version_after 0s
max_versions 5
oldest_version 0
updated_time 2024-11-14T22:32:42.29534643Z
====== Version 1 ======
Key Value
--- -----
created_time 2024-11-13T21:51:50.898782695Z
deletion_time 2024-11-15T00:45:04.057772212Z
destroyed false
...
====== Version 4 ======
Key Value
--- -----
created_time 2024-11-14T22:32:42.29534643Z
deletion_time 2024-11-15T00:45:04.057772712Z
destroyed false
```
</CodeBlockConfig>
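Omitting the `-versions` flag soft deletes only the latest version. A minimal
sketch using the same illustrative mount and path:

```shell-session
$ vault kv delete -mount shared dev/square-api
Success! Data deleted (if it existed) at: shared/data/dev/square-api
```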
</Tab>
<Tab heading="GUI" group="gui">
@include 'gui-instructions/plugins/kv/open-overview.mdx'
- Select the **Secret** tab.
- Select the appropriate data version from the **Version** dropdown.
- Click **Delete**.
- Select **Delete this version** to delete the selected version or
**Delete latest version** to delete the most recent data.
- Click **Confirm**.

</Tab>
<Tab heading="API" group="api">
Make a `POST` call to
[`/{plugin_mount_path}/delete/{secret_path}`](/vault/api-docs/secret/kv/kv-v2#delete-secret-versions)
with the data versions you want to soft delete:
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data '{"versions":[<target_versions>]} \
${VAULT_ADDR}/v1/<plugin_mount_path>/delete/<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data '{"versions":[5,8]}' \
${VAULT_ADDR}/v1/shared/delete/dev/square-api | jq
```
</CodeBlockConfig>

`/{plugin_mount_path}/delete/{secret_path}` does not return data on success.
</Tab>
</Tabs>
---
layout: docs
page_title: Permanently delete data
description: >-
Permanently delete versioned key/value data in the kv v2 plugin.
---
# Destroy key/value data
The standard `vault kv delete` command performs soft deletes. To permanently
delete (destroy) data, use the CLI, GUI, or API so Vault purges the underlying
data and sets the `destroyed` metadata field to `true`.
<Tip title="Assumptions">
- You have [set up a `kv` v2 plugin](/vault/docs/secrets/kv/kv-v2/setup).
- Your authentication token has `create` and `update` permissions for the `kv`
v2 plugin.
</Tip>
<Tabs>
<Tab heading="CLI" group="cli">
Use [`vault kv destroy`](/vault/docs/commands/kv/destroy) with the `-versions`
flag to permanently delete one or more versions of key/value data:
```shell-session
$ vault kv destroy \
-mount <mount_path> \
-versions <target_versions> \
<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault kv destroy -mount shared -versions 2,3 dev/square-api
Success! Data written to: shared/destroy/dev/square-api
```
</CodeBlockConfig>
The `destroyed` metadata field for versions 2 and 3 is now `true`:
<CodeBlockConfig hideClipboard="true" highlight="25,32">
```shell-session
$ vault kv metadata get -mount shared dev/square-api
======== Metadata Path ========
shared/metadata/dev/square-api
========== Metadata ==========
Key Value
--- -----
cas_required false
created_time 2024-11-13T21:51:50.898782695Z
current_version 4
custom_metadata <nil>
delete_version_after 0s
max_versions 5
oldest_version 0
updated_time 2024-11-14T22:32:42.29534643Z
...
====== Version 2 ======
Key Value
--- -----
created_time 2024-11-13T21:52:10.326204209Z
deletion_time n/a
destroyed true
====== Version 3 ======
Key Value
--- -----
created_time 2024-11-13T21:58:32.128442898Z
deletion_time n/a
destroyed true
```
</CodeBlockConfig>
</Tab>
<Tab heading="GUI" group="gui">
@include 'gui-instructions/plugins/kv/open-overview.mdx'
- Select the **Secret** tab.
- Select the appropriate data version from the **Version** dropdown.
- Click **Destroy**.
- Click **Confirm**.

</Tab>
<Tab heading="API" group="api">
Make a `POST` call to
[`/{plugin_mount_path}/destroy/{secret_path}`](/vault/api-docs/secret/kv/kv-v2#destroy-secret-versions)
with the data versions you want to destroy:
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data '{"versions":[<target_versions>]} \
${VAULT_ADDR}/v1/<plugin_mount_path>/destroy/<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data '{"versions":[4,7]}' \
${VAULT_ADDR}/v1/shared/destroy/dev/square-api | jq
```
</CodeBlockConfig>
`/{plugin_mount_path}/destroy/{secret_path}` does not return data on success.
</Tab>
</Tabs>
---
layout: docs
page_title: Set max data versions
description: >-
Use max-versions to automatically destroy older data versions in the kv v2
plugin.
---
# Set max data versions in key/value v2
Limit the number of active versions for a `kv` v2 secret path so Vault
permanently deletes (destroys) older data versions automatically.
<Tip title="Assumptions">
- You have [set up a `kv` v2 plugin](/vault/docs/secrets/kv/kv-v2/setup).
- Your authentication token has `create` and `update` permissions for the `kv`
v2 plugin.
</Tip>
<Tabs>
<Tab heading="CLI" group="cli">
Use [`vault kv metadata put`](/vault/docs/commands/kv/metadata) to change the max
number of versions allowed for a `kv` secret path:
```shell-session
$ vault kv metadata put \
-max-versions <max_versions> \
-mount <mount_path> \
<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault kv metadata put \
-max-versions 5 \
-mount shared \
dev/square-api
Success! Data written to: shared/metadata/dev/square-api
```
</CodeBlockConfig>
Vault now auto-deletes data when the number of versions exceeds 5:
<CodeBlockConfig hideClipboard="true" highlight="14">
```shell-session
$ vault kv metadata get -mount shared dev/square-api
======== Metadata Path ========
shared/metadata/dev/square-api
========== Metadata ==========
Key Value
--- -----
cas_required false
created_time 2024-11-13T21:51:50.898782695Z
current_version 4
custom_metadata <nil>
delete_version_after 0s
max_versions 5
oldest_version 0
updated_time 2024-11-14T22:32:42.29534643Z
====== Version 1 ======
Key Value
--- -----
created_time 2024-11-13T21:51:50.898782695Z
deletion_time n/a
destroyed false
```
</CodeBlockConfig>
</Tab>
<Tab heading="GUI" group="gui">
@include 'gui-instructions/plugins/kv/open-overview.mdx'
- Select the **Metadata** tab.
- Click **Edit metadata >**.
- Update the **Maximum number of versions** field.
- Click **Update**.

</Tab>
<Tab heading="API" group="api">
1. Create a JSON file with the metadata field `max_versions` set to the maximum
number of versions you want to allow.
1. Make a `POST` call to
[`/{plugin_mount_path}/metadata/{secret_path}`](/vault/api-docs/secret/kv/kv-v2#create-update-metadata)
with the JSON data file:
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data @metadata.json \
${VAULT_ADDR}/v1/<plugin_mount_path>/metadata/<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```json
{
"max_versions": 10
}
```
</CodeBlockConfig>
<CodeBlockConfig hideClipboard="true">
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data @metadata.json \
${VAULT_ADDR}/v1/shared/metadata/dev/square-api
```
</CodeBlockConfig>
`/{plugin_mount_path}/metadata/{secret_path}` does not return data on success.
</Tab>
</Tabs>
---
layout: docs
page_title: Read subkeys
description: >-
Read the available subkeys on a given path from the kv v2 plugin
---
# Read subkeys for a key/value data path
Read the available subkeys on an existing data path in the `kv` v2 plugin.
<Tip title="Assumptions">
- You have [set up a `kv` v2 plugin](/vault/docs/secrets/kv/kv-v2/setup).
- Your authentication token has `read` permissions for subkeys on the target
secret path.
</Tip>
<Tabs>
<Tab heading="CLI" group="cli">
Use `vault read` with the `/subkeys` path to retrieve a list of secret data
subkeys at the given path.
```shell-session
$ vault read <mount_path>/subkeys/<secret_path>
```
Vault retrieves the secret at the given path but replaces the values of leaf
keys (non-map keys and map keys with no underlying subkeys) with `nil`.
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault read shared/subkeys/dev/square-api
Key Value
--- -----
metadata map[created_time:2024-11-20T20:00:13.385182722Z custom_metadata:<nil> deletion_time: destroyed:false version:1]
subkeys map[prod:<nil> sandbox:<nil> smoke:<nil>]
```
</CodeBlockConfig>
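The response preserves nested map structure and only blanks out leaf values.
For example, if the stored data were the hypothetical map
`{"smoke": "abcd", "deploy": {"token": "efgh"}}`, the `subkeys` value would
read as follows (illustrative output):

```shell-session
$ vault read shared/subkeys/dev/square-api
...
subkeys    map[deploy:map[token:<nil>] smoke:<nil>]
```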
</Tab>
<Tab heading="GUI" group="gui">
@include 'alerts/enterprise-only.mdx'
@include 'gui-instructions/plugins/kv/open-overview.mdx'
You can read a list of available subkeys for the target path in the **Subkeys**
card.

</Tab>
<Tab heading="API" group="api">
Call the [`/{plugin_mount_path}/subkeys/{secret_path}`](/vault/api-docs/secret/kv/kv-v2#read-secret-subkeys)
endpoint to fetch a list of available subkeys on the given path:
```shell-session
$ curl \
--request GET \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
${VAULT_ADDR}/v1/<plugin_mount_path>/subkeys/<secret_path>
```
Vault retrieves the secret at the given path but replaces the values of leaf
keys (non-map keys and map keys with no underlying subkeys) with `null`.
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ curl \
--request GET \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
${VAULT_ADDR}/v1/shared/subkeys/dev/square-api | jq
{
"request_id": "bfeac3c5-f4dc-37b2-8909-3b15121cfd20",
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": {
"metadata": {
"created_time": "2024-11-20T20:00:13.385182722Z",
"custom_metadata": null,
"deletion_time": "",
"destroyed": false,
"version": 11
},
"subkeys": {
"prod": null,
"sandbox": null,
"smoke": null
}
},
"wrap_info": null,
"warnings": null,
"auth": null,
"mount_type": "kv"
}
```
</CodeBlockConfig>
</Tab>
</Tabs>
---
layout: docs
page_title: Write custom metadata
description: >-
Write custom metadata fields to your kv v2 plugin.
---
# Write custom metadata in key/value v2
Write custom metadata to a `kv` v2 secret path.
<Tip title="Assumptions">
- You have [set up a `kv` v2 plugin](/vault/docs/secrets/kv/kv-v2/setup).
- Your authentication token has `create` and `update` permissions for the `kv`
v2 plugin.
</Tip>
<Tabs>
<Tab heading="CLI" group="cli">
Use [`vault kv metadata put`](/vault/docs/commands/kv/metadata) to set custom
metadata fields for a `kv` secret path. Repeat the `-custom-metadata` flag for
each key/value metadata entry:
```shell-session
$ vault kv metadata put \
-custom-metadata <key_value_pair> \
-mount <mount_path> \
<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault kv metadata put \
-custom-metadata "use=API keys for different dev environments" \
-custom-metadata "renew-date=2026-11-14" \
-mount shared \
dev/square-api
Success! Data written to: shared/metadata/dev/square-api
```
</CodeBlockConfig>
The `custom_metadata` metadata field now includes a map with the two custom
fields:
<CodeBlockConfig hideClipboard="true" highlight="14">
```shell-session
$ vault kv metadata get -mount shared dev/square-api
======== Metadata Path ========
shared/metadata/dev/square-api
========== Metadata ==========
Key Value
--- -----
cas_required false
created_time 2024-11-13T21:51:50.898782695Z
current_version 9
custom_metadata map[use:API keys for different dev environments renew-date:2026-11-14]
delete_version_after 0s
max_versions 10
oldest_version 4
updated_time 2024-11-15T03:10:26.749233814Z
====== Version 1 ======
Key Value
--- -----
created_time 2024-11-13T21:51:50.898782695Z
deletion_time n/a
destroyed false
```
</CodeBlockConfig>
</Tab>
<Tab heading="GUI" group="gui">
@include 'gui-instructions/plugins/kv/open-overview.mdx'
- Select the **Metadata** tab.
- Click **Edit metadata >**.
- Set a new key name and value under **Custom metadata**.
- Use the **Add** button to set additional key/value pairs.
- Click **Update**.

</Tab>
<Tab heading="API" group="api">
1. Create a JSON file with the metadata you want to write to your `kv` v2
plugin. Use the `custom_metadata` field to define the custom metadata fields
and initial values.
1. Make a `POST` call to
[`/{plugin_mount_path}/metadata/{secret_path}`](/vault/api-docs/secret/kv/kv-v2#create-update-metadata)
with the JSON data file:
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data @metadata.json \
${VAULT_ADDR}/v1/<plugin_mount_path>/metadata/<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```json
{
"custom_metadata": {
"use": "API keys for different dev environments",
"renew-date": "2026-11-14"
}
}
```
</CodeBlockConfig>
<CodeBlockConfig hideClipboard="true">
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data @metadata.json \
${VAULT_ADDR}/v1/shared/metadata/dev/square-api
```
</CodeBlockConfig>
`/{plugin_mount_path}/metadata/{secret_path}` does not return data on success.
</Tab>
</Tabs>
---
layout: docs
page_title: Patch data
description: >-
Make partial updates or add new keys to versioned data in the kv v2 plugin
---
# Patch versioned key/value data
Use the patch process to update specific values or add new key/value pairs to
an existing data path in the `kv` v2 plugin.
<Tip title="Assumptions">
- You have [set up a `kv` v2 plugin](/vault/docs/secrets/kv/kv-v2/setup).
- Your authentication token has appropriate permissions for the `kv` v2 plugin:
- **`patch`** permission to make direct updates with `PATCH` actions (see the policy sketch after this list).
- **`create`**+**`update`** permission if you want to make indirect
updates with the Vault CLI by combining `GET` and `POST` actions.
- You know the keys or [subkeys](/vault/docs/secrets/kv/kv-v2/cookbook/read-subkey)
you want to patch.
</Tip>
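The `patch` capability is its own ACL capability in Vault, distinct from
`update`. A minimal policy sketch for direct `PATCH` updates, assuming the
`shared` mount and `dev/square-api` path used in the examples below:

```hcl
# Illustrative policy only; the mount and secret path are assumptions.
path "shared/data/dev/square-api" {
  capabilities = ["patch"]
}
```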
<Tabs>
<Tab heading="CLI" group="cli">
Use the [`vault kv patch`](/vault/docs/commands/kv/patch) command and set the
`-cas` flag to the expected data version to perform a check-and-set operation
before applying the patch:
```shell-session
$ vault kv patch \
-cas <target_version> \
-mount <mount_path> \
<secret_path> \
<key_name>=<key_value>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault kv patch \
-cas 2 \
-mount shared \
dev/square-api \
prod=5678
======= Secret Path =======
shared/data/dev/square-api
======= Metadata =======
Key Value
--- -----
created_time 2024-11-13T21:52:10.326204209Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 2
```
</CodeBlockConfig>
If the `-cas` version is older than the current version of data at the target
path, the patch fails:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault kv patch -cas 1 -mount shared dev/square-api prod=5678
Error writing data to shared/data/dev/square-api: Error making API request.
URL: PATCH http://192.168.0.1:8200/v1/shared/data/dev/square-api
Code: 400. Errors:
* check-and-set parameter did not match the current version
```
</CodeBlockConfig>
To **force** a patch, you can exclude the `-cas` flag **or** use the
`read+write` patch method with the `-method` flag. For example:
```shell-session
$ vault kv patch -method rw -mount shared dev/square-api prod=5678
======= Secret Path =======
shared/data/dev/square-api
======= Metadata =======
Key Value
--- -----
created_time 2024-11-13T21:58:32.128442898Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 3
```
Instead of using an HTTP `PATCH` action, the `read+write` method uses a sequence
of `GET` and `POST` operations to fetch the most recent version of data stored
at the targeted path, perform an in-memory update to the targeted keys, then
push the update to the plugin.
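For illustration only, the following sketch approximates that sequence with
explicit CLI calls, using values assumed from the earlier examples. Note that,
unlike a true `PATCH`, the final write must re-supply every key you want to
keep:

```shell-session
# 1. GET the current version of the secret
$ vault kv get -format=json -mount shared dev/square-api

# 2. POST the merged result back, including the unchanged keys
$ vault kv put -mount shared dev/square-api sandbox=1234 prod=5678 smoke=abcd
```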
</Tab>
<Tab heading="GUI" group="gui">
@include 'alerts/enterprise-only.mdx'
@include 'gui-instructions/plugins/kv/open-overview.mdx'
- Select the **Secret** tab.
- Click **Patch latest version +**.
- Edit the values you want to update.
- Click **Save**.

</Tab>
<Tab heading="API" group="api">
1. Create a JSON file with the key/value data you want to patch. Use the
`options` field to set optional flags and `data` to define the key/value pairs
you want to update.
1. Make a `PATCH` call to
[`/{plugin_mount_path}/data/{secret_path}`](/vault/api-docs/secret/kv/kv-v2#patch-secret)
with the JSON data file and the `Content-Type` header set to
`application/merge-patch+json`:
```shell-session
$ curl \
--request PATCH \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--header "Content-Type: application/merge-patch+json" \
--data @data.json \
${VAULT_ADDR}/v1/<plugin_mount_path>/data/<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```json
{
"options": {
"cas": 4
},
"data": {
"smoke": "efgh"
}
}
```
</CodeBlockConfig>
<CodeBlockConfig hideClipboard="true">
```shell-session
$ curl \
--request PATCH \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--header "Content-Type: application/merge-patch+json" \
--data @data.json \
${VAULT_ADDR}/v1/shared/data/dev/square-api | jq
{
"request_id": "6f3bae46-6444-adeb-372a-7f100b4117f9",
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": {
"created_time": "2024-11-15T02:52:24.287700164Z",
"custom_metadata": null,
"deletion_time": "",
"destroyed": false,
"version": 5
},
"wrap_info": null,
"warnings": null,
"auth": null,
"mount_type": "kv"
}
```
</CodeBlockConfig>
</Tab>
</Tabs>
---
layout: docs
page_title: Write new data
description: >-
Write new versioned data to the kv v2 plugin
---
# Write new key/value data
Write new versions of data to a new or existing data path in the `kv` v2 plugin.
<Tip title="Assumptions">
- You have [set up a `kv` v2 plugin](/vault/docs/secrets/kv/kv-v2/setup).
- Your authentication token has `create` and `update` permissions for the `kv`
v2 plugin.
</Tip>
<Tabs>
<Tab heading="CLI" group="cli">
<Note>
The Vault CLI forcibly converts `kv` keys and values to strings before
writing data. To preserve non-string data, write your key/value pairs to Vault
from a JSON file or use the plugin API.
</Note>
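As a sketch, assuming a hypothetical `data.json` file, you can pass the file
to `vault kv put` with the `@` syntax so numbers and booleans keep their
types:

```shell-session
$ cat data.json
{
  "retries": 5,
  "enabled": true
}

$ vault kv put -mount shared dev/square-api @data.json
```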
Use [`vault kv put`](/vault/docs/commands/kv/put) to save a new version of
key/value data to a new or existing secret path:
```shell-session
$ vault kv put \
-mount <mount_path> \
<secret_path> \
<list_of_kv_values>
```
For example:
<CodeBlockConfig hideClipboard="true">
```shell-session
$ vault kv put \
-mount shared \
dev/square-api \
sandbox=1234 prod=5679 smoke=abcd
======= Secret Path =======
shared/data/dev/square-api
======= Metadata =======
Key Value
--- -----
created_time 2024-11-15T01:52:23.434633061Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 5
```
</CodeBlockConfig>
</Tab>
<Tab heading="GUI" group="gui">
The Vault GUI forcibly converts non-string values to strings before writing
data. To preserve non-string values, use the JSON toggle to write your
key/value data as JSON.
@include 'gui-instructions/plugins/kv/open-overview.mdx'
- Click **Create new +** from one of the following tabs:
- **Overview** tab: in the "Current version" card.
- **Secret** tab: in the toolbar.
- Set a new key name and value.
- Use the **Add** button to set additional key/value pairs.
- Click **Save** to write the new version data.

</Tab>
<Tab heading="API" group="api">
1. Create a JSON file with the key/value data you want to write to Vault. Use
the `options` field to set optional flags and `data` to define the key/value
pairs.
1. Make a `POST` call to
[`/{plugin_mount_path}/data/{secret_path}`](/vault/api-docs/secret/kv/kv-v2#create-update-secret)
with the JSON data:
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data @data.json \
${VAULT_ADDR}/v1/<plugin_mount_path>/data/<secret_path>
```
For example:
<CodeBlockConfig hideClipboard="true">
```json
{
"options": {
"cas": 4
},
"data": {
"sandbox": "1234",
"prod": "5679",
"smoke": "abcd"
}
}
```
</CodeBlockConfig>
<CodeBlockConfig hideClipboard="true">
```shell-session
$ curl \
--request POST \
--header "X-Vault-Token: ${VAULT_TOKEN}" \
--data @data.json \
${VAULT_ADDR}/v1/shared/data/dev/square-api | jq
{
"request_id": "0c872d86-0def-4261-34d9-b796039ec02f",
"lease_id": "",
"renewable": false,
"lease_duration": 0,
"data": {
"created_time": "2024-11-15T02:41:02.556301319Z",
"custom_metadata": null,
"deletion_time": "",
"destroyed": false,
"version": 5
},
"wrap_info": null,
"warnings": null,
"auth": null,
"mount_type": "kv"
}
```
</CodeBlockConfig>
</Tab>
</Tabs>
---
layout: docs
page_title: Active Directory - Secrets Engines
description: >-
The Active Directory secrets engine allows Vault to generate dynamic credentials.
---
# Active directory secrets engine
@include 'ad-secrets-deprecation.mdx'
@include 'x509-sha1-deprecation.mdx'
The Active Directory (AD) secrets engine is a plugin residing [here](https://github.com/hashicorp/vault-plugin-secrets-active-directory).
It has two main features.
The first feature (password rotation) is where the AD secrets engine rotates AD passwords dynamically.
This is designed for a high-load environment where many instances may be accessing
a shared password simultaneously. With a simple set up and a simple creds API,
it doesn't require instances to be manually registered in advance to gain access.
As long as access has been granted to the creds path via a method like
[AppRole](/vault/api-docs/auth/approle), they're available. Passwords are
lazily rotated based on preset TTLs and can have a length configured to meet your needs. Additionally,
passwords can be manually rotated using the [rotate-role](/vault/api-docs/secret/ad#rotate-role-credentials) endpoint.
The second feature (service account check-out) is where a library of service accounts can
be checked out by a person or by machines. Vault will automatically rotate the password
each time a service account is checked in. Service accounts can be voluntarily checked in, or Vault
will check them in when their lending period (or, "ttl", in Vault's language) ends.
## Password rotation
### Customizing password generation
There are two ways of customizing how passwords are generated in the Active Directory secret engine:
1. [Password Policies](/vault/docs/concepts/password-policies)
2. `length` and `formatter` fields within the [configuration](/vault/api-docs/secret/ad#password-parameters)
Utilizing password policies is the recommended path as the `length` and `formatter` fields have
been deprecated in favor of password policies. The `password_policy` field within the configuration
cannot be specified alongside either `length` or `formatter` to prevent a confusing configuration.
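As an illustrative sketch, you could define a password policy and reference
it from the engine configuration. The policy name `ad-rotation` and the
character set below are assumptions, not requirements:

```shell-session
$ cat ad-rotation.hcl
length = 32

rule "charset" {
  charset   = "abcdefghijklmnopqrstuvwxyz0123456789"
  min-chars = 1
}

$ vault write sys/policies/password/ad-rotation [email protected]
$ vault write ad/config password_policy=ad-rotation \
    binddn=$USERNAME bindpass=$PASSWORD \
    url=ldaps://138.91.247.105 userdn='dc=example,dc=com'
```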
### A note on lazy rotation
To drive home the point that passwords are rotated "lazily", consider this scenario:
- A password is configured with a TTL of 1 hour.
- All instances of a service using this password are off for 12 hours.
- Then they wake up and again request the password.
In this scenario, although the password TTL was set to 1 hour, the password wouldn't be rotated for 12 hours when it
was next requested. "Lazy" rotation means passwords are rotated when all of the following conditions are true:
- They are over their TTL
- They are requested
Therefore, the AD TTL can be considered a soft contract. It's fulfilled when the given password is next requested.
To ensure your passwords are rotated as expected, we'd recommend you configure services to request each password at least
twice as often as its TTL.
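Requesting a password is a single read against the role's creds path. For
example, assuming the `my-application` role configured in the quick setup
below (output values are illustrative):

```shell-session
$ vault read ad/creds/my-application
Key                 Value
---                 -----
current_password    ?@09AZ...
last_password       ?@09AZ...
username            my-application
```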
### A note on escaping
**It is up to the administrator** to provide properly escaped DNs. This
includes the user DN, bind DN for search, and so on.
The only DN escaping performed by this method is on usernames given at login
time when they are inserted into the final bind DN, and uses escaping rules
defined in RFC 4514.
Additionally, Active Directory has escaping rules that differ slightly from the
RFC; in particular it requires escaping of '#' regardless of position in the DN
(the RFC only requires it to be escaped when it is the first character), and
'=', which the RFC indicates can be escaped with a backslash, but does not
contain in its set of required escapes. If you are using Active Directory and
these appear in your usernames, please ensure that they are escaped, in
addition to being properly escaped in your configured DNs.
For reference, see [RFC 4514](https://www.ietf.org/rfc/rfc4514.txt) and this
[TechNet post on characters to escape in Active
Directory](http://social.technet.microsoft.com/wiki/contents/articles/5312.active-directory-characters-to-escape.aspx).
### Quick setup
Most secrets engines must be configured in advance before they can perform their
functions. These steps are usually completed by an operator or configuration
management tool.
1. Enable the Active Directory secrets engine:
```text
$ vault secrets enable ad
Success! Enabled the ad secrets engine at: ad/
```
By default, the secrets engine will mount at the name of the engine. To
enable the secrets engine at a different path, use the `-path` argument.
2. Configure the credentials that Vault uses to communicate with Active Directory
to generate passwords:
```text
$ vault write ad/config \
binddn=$USERNAME \
bindpass=$PASSWORD \
url=ldaps://138.91.247.105 \
userdn='dc=example,dc=com'
```
The `$USERNAME` and `$PASSWORD` given must have access to modify passwords
for the given account. It is possible to delegate the ability to change
these passwords to the account Vault binds with, and this is
usually the highest-security solution.
If you'd like to do a quick, insecure evaluation, also set `insecure_tls` to true. However, this is NOT RECOMMENDED
in a production environment. In production, we recommend `insecure_tls` is false (its default) and is used with a valid
`certificate`.
3. Configure a role that maps a name in Vault to an account in Active Directory.
When applications request passwords, password rotation settings will be managed by
this role.
```text
$ vault write ad/roles/my-application \
service_account_name="[email protected]"
```
4. Grant "my-application" access to its creds at `ad/creds/my-application` using an
auth method like [AppRole](/vault/api-docs/auth/approle).
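A minimal policy sketch for that grant might look like the following; attach
it to your AppRole (or other auth method) however you manage policies:

```hcl
# Illustrative policy only: read access to my-application's creds path
path "ad/creds/my-application" {
  capabilities = ["read"]
}
```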
### FAQ
#### What if someone directly rotates an active directory password that Vault is managing?
If an administrator at your company rotates a password that Vault is managing,
the next time an application asks _Vault_ for that password, Vault won't know
it.
To maintain that application's up-time, Vault will need to return to a state of
knowing the password. Vault will generate a new password, update it, and return
it to the application(s) asking for it. This all occurs automatically, without
human intervention.
Thus, we wouldn't recommend that administrators directly rotate the passwords
for accounts that Vault is managing. This may lead to behavior the
administrator wouldn't expect, like finding very quickly afterwards that their
new password has already been changed.
The password `ttl` on a role can be updated at any time to ensure that the
responsibility of updating passwords can be left to Vault, rather than
requiring manual administrator updates.
#### Why does Vault return the last password in addition to the current one?
Active Directory promises _eventual consistency_, which means that new
passwords may not be propagated to all instances immediately. To deal with
this, Vault returns the current password with the last password if it's known.
That way, if a new password isn't fully operational, the last password can also
be used.
## Service account check-out
Vault offers the ability to check service accounts in and out. This is a
separate set of functionality from the password rotation feature above. Let's walk
through how to use it, with explanation at each step.
First we'll need to enable the AD secrets engine and tell it how to talk to our AD
server just as we did above.
```shell-session
$ vault secrets enable ad
Success! Enabled the ad secrets engine at: ad/
$ vault write ad/config \
binddn=$USERNAME \
bindpass=$PASSWORD \
url=ldaps://138.91.247.105 \
userdn='dc=example,dc=com'
```
Our next step is to designate a set of service accounts for check-out.
```shell-session
$ vault write ad/library/accounting-team \
[email protected],[email protected] \
ttl=10h \
max_ttl=20h \
disable_check_in_enforcement=false
```
In this example, the service account names of `[email protected]` and `[email protected]` have
already been created on the remote AD server. They've been set aside solely for Vault to handle.
The `ttl` is how long each check-out will last before Vault checks in a service account,
rotating its password during check-in. The `max_ttl` is the maximum amount of time it can live
if it's renewed. These default to `24h`, and both use [duration format strings](/vault/docs/concepts/duration-format).
Also by default, a service account must be checked in by the same Vault entity or client token that
checked it out. However, if this behavior causes problems, set `disable_check_in_enforcement=true`.
When a library of service accounts has been created, view their status at any time to see if they're
available or checked out.
```shell-session
$ vault read ad/library/accounting-team/status
Key Value
--- -----
[email protected] map[available:true]
[email protected] map[available:true]
```
To check out any service account that's available, simply execute:
```shell-session
$ vault write -f ad/library/accounting-team/check-out
Key Value
--- -----
lease_id ad/library/accounting-team/check-out/EpuS8cX7uEsDzOwW9kkKOyGW
lease_duration 10h
lease_renewable true
password ?@09AZKh03hBORZPJcTDgLfntlHqxLy29tcQjPVThzuwWAx/Twx4a2ZcRQRqrZ1w
service_account_name [email protected]
```
If the default `ttl` for the check-out is higher than needed, set the check-out to last
for a shorter time by using:
```shell-session
$ vault write ad/library/accounting-team/check-out ttl=30m
Key Value
--- -----
lease_id ad/library/accounting-team/check-out/gMonJ2jB6kYs6d3Vw37WFDCY
lease_duration 30m
lease_renewable true
password ?@09AZerLLuJfEMbRqP+3yfQYDSq6laP48TCJRBJaJu/kDKLsq9WxL9szVAvL/E1
service_account_name [email protected]
```
This can be a nice way to say, "Although I _can_ have a check-out for 24 hours, if I
haven't checked it in after 30 minutes, I forgot or I'm a dead instance, so you can just
check it back in."
If no service accounts are available for check-out, Vault will return a 400 Bad Request.
```shell-session
$ vault write -f ad/library/accounting-team/check-out
Error writing data to ad/library/accounting-team/check-out: Error making API request.
URL: POST http://localhost:8200/v1/ad/library/accounting-team/check-out
Code: 400. Errors:
* No service accounts available for check-out.
```
To extend a check-out, renew its lease.
```shell-session
$ vault lease renew ad/library/accounting-team/check-out/0C2wmeaDmsToVFc0zDiX9cMq
Key Value
--- -----
lease_id ad/library/accounting-team/check-out/0C2wmeaDmsToVFc0zDiX9cMq
lease_duration 10h
lease_renewable true
```
Renewing a check-out means its current password will live longer, since passwords are rotated
anytime a password is _checked in_ either by a caller, or by Vault because the check-out `ttl`
ends.
To check a service account back in for others to use, call:
```shell-session
$ vault write -f ad/library/accounting-team/check-in
Key Value
--- -----
check_ins [[email protected]]
```
Most of the time this will just work, but if multiple service accounts are
checked out by the same caller, Vault will need to know which one(s) to check in.
```shell-session
$ vault write ad/library/accounting-team/check-in [email protected]
Key Value
--- -----
check_ins [[email protected]]
```
To perform a check-in, Vault verifies that the caller _should_ be able to check in a given service account.
To do this, Vault looks for either the same [entity ID](/vault/tutorials/auth-methods/identity)
used to check out the service account, or the same client token.
If a caller is unable to check in a service account, or simply doesn't try,
Vault will check it back in automatically when the `ttl` expires. However, if that is too long,
service accounts can be forcibly checked in by a highly privileged user through:
```shell-session
$ vault write -f ad/library/manage/accounting-team/check-in
Key Value
--- -----
check_ins [[email protected]]
```
Or, alternatively, revoking the secret's lease has the same effect.
```shell-session
$ vault lease revoke ad/library/accounting-team/check-out/PvBVG0m7pEg2940Cb3Jw3KpJ
All revocation operations queued successfully!
```
### Troubleshooting
#### Old passwords are still valid for a period of time.
During testing, we found that by default, many versions of Active Directory
perpetuate old passwords for a short while. After we discovered this behavior,
we found articles discussing it by searching for "AD password caching" and "OldPasswordAllowedPeriod". We
also found [an article from Microsoft](https://support.microsoft.com/en-us/help/906305/new-setting-modifies-ntlm-network-authentication-behavior)
discussing how to configure this behavior. This behavior appears to vary by AD
version. We recommend you test the behavior of your particular AD server,
and edit its settings to gain the desired behavior.
#### I get a lot of 400 Bad Requests when trying to check out service accounts.
This will occur when there aren't enough service accounts for those requesting them. Let's
suppose our "accounting-team" service accounts are the ones being requested. When Vault
receives a check-out call but none are available, Vault will log at debug level:
"'accounting-team' had no check-outs available". Vault will also increment a metric
containing the strings "active directory", "check-out", "unavailable", and "accounting-team".
Once it's known _which_ library needs more service accounts for checkout, fix this issue
by merely creating a new service account for it to use in Active Directory, then adding it to
Vault like so:
```shell-session
$ vault write ad/library/accounting-team \
[email protected],[email protected],[email protected]
```
In this example, fizz and buzz were pre-existing but were still included in the call
because we'd like them to exist in the resulting set. The new account was appended to
the end.
#### Sometimes Vault gives me a password but then AD says it's not valid.
Active Directory is eventually consistent, meaning that it can take some time for word
of a new password to travel across all AD instances in a cluster. In larger clusters, we
have observed the password taking over 10 seconds to propagate fully. The simplest way to
handle this is to simply wait and retry using the new password.
#### When trying to read credentials I get 'LDAP result code 53 "Unwilling to perform"'
Active Directory will only support password changes over a secure connection. Ensure that your configuration block is not using an unsecured LDAP connection.
## Tutorial
Refer to the [Active Directory Service Account Check-out](/vault/tutorials/secrets-management/active-directory) tutorial to learn how to enable a team to share a select set of service accounts.
## API
The Active Directory secrets engine has a full HTTP API. Please see the
[Active Directory secrets engine API](/vault/api-docs/secret/ad) for more
details. | vault | layout docs page title Active Directory Secrets Engines description The Active Directory secrets engine allowing Vault to generate dynamic credentials Active directory secrets engine include ad secrets deprecation mdx include x509 sha1 deprecation mdx The Active Directory AD secrets engine is a plugin residing here https github com hashicorp vault plugin secrets active directory It has two main features The first feature password rotation is where the AD secrets engine rotates AD passwords dynamically This is designed for a high load environment where many instances may be accessing a shared password simultaneously With a simple set up and a simple creds API it doesn t require instances to be manually registered in advance to gain access As long as access has been granted to the creds path via a method like AppRole vault api docs auth approle they re available Passwords are lazily rotated based on preset TTLs and can have a length configured to meet your needs Additionally passwords can be manually rotated using the rotate role vault api docs secret ad rotate role credentials endpoint The second feature service account check out is where a library of service accounts can be checked out by a person or by machines Vault will automatically rotate the password each time a service account is checked in Service accounts can be voluntarily checked in or Vault will check them in when their lending period or ttl in Vault s language ends Password rotation Customizing password generation There are two ways of customizing how passwords are generated in the Active Directory secret engine 1 Password Policies vault docs concepts password policies 2 length and formatter fields within the configuration vault api docs secret ad password parameters Utilizing password policies is the recommended path as the length and formatter fields have been deprecated in favor of password policies The password policy field within the configuration cannot be specified alongside either length or formatter to prevent a confusing configuration A note on lazy rotation To drive home the point that passwords are rotated lazily consider this scenario A password is configured with a TTL of 1 hour All instances of a service using this password are off for 12 hours Then they wake up and again request the password In this scenario although the password TTL was set to 1 hour the password wouldn t be rotated for 12 hours when it was next requested Lazy rotation means passwords are rotated when all of the following conditions are true They are over their TTL They are requested Therefore the AD TTL can be considered a soft contract It s fulfilled when the given password is next requested To ensure your passwords are rotated as expected we d recommend you configure services to request each password at least twice as often as its TTL A note on escaping It is up to the administrator to provide properly escaped DNs This includes the user DN bind DN for search and so on The only DN escaping performed by this method is on usernames given at login time when they are inserted into the final bind DN and uses escaping rules defined in RFC 4514 Additionally Active Directory has escaping rules that differ slightly from the RFC in particular it requires escaping of regardless of position in the DN the RFC only requires it to be escaped when it is the first character and which the RFC indicates can be escaped with a backslash but does not contain in its set of required escapes If you are using Active Directory and these appear in 
your usernames please ensure that they are escaped in addition to being properly escaped in your configured DNs For reference see RFC 4514 https www ietf org rfc rfc4514 txt and this TechNet post on characters to escape in Active Directory http social technet microsoft com wiki contents articles 5312 active directory characters to escape aspx Quick setup Most secrets engines must be configured in advance before they can perform their functions These steps are usually completed by an operator or configuration management tool 1 Enable the Active Directory secrets engine text vault secrets enable ad Success Enabled the ad secrets engine at ad By default the secrets engine will mount at the name of the engine To enable the secrets engine at a different path use the path argument 2 Configure the credentials that Vault uses to communicate with Active Directory to generate passwords text vault write ad config binddn USERNAME bindpass PASSWORD url ldaps 138 91 247 105 userdn dc example dc com The USERNAME and PASSWORD given must have access to modify passwords for the given account It is possible to delegate access to change passwords for these accounts to the one Vault is in control of and this is usually the highest security solution If you d like to do a quick insecure evaluation also set insecure tls to true However this is NOT RECOMMENDED in a production environment In production we recommend insecure tls is false its default and is used with a valid certificate 3 Configure a role that maps a name in Vault to an account in Active Directory When applications request passwords password rotation settings will be managed by this role text vault write ad roles my application service account name my application example com 4 Grant my application access to its creds at ad creds my application using an auth method like AppRole vault api docs auth approle FAQ What if someone directly rotates an active directory password that Vault is managing If an administrator at your company rotates a password that Vault is managing the next time an application asks Vault for that password Vault won t know it To maintain that application s up time Vault will need to return to a state of knowing the password Vault will generate a new password update it and return it to the application s asking for it This all occurs automatically without human intervention Thus we wouldn t recommend that administrators directly rotate the passwords for accounts that Vault is managing This may lead to behavior the administrator wouldn t expect like finding very quickly afterwards that their new password has already been changed The password ttl on a role can be updated at any time to ensure that the responsibility of updating passwords can be left to Vault rather than requiring manual administrator updates Why does Vault return the last password in addition to the current one Active Directory promises eventual consistency which means that new passwords may not be propagated to all instances immediately To deal with this Vault returns the current password with the last password if it s known That way if a new password isn t fully operational the last password can also be used Service account Check Out Vault offers the ability to check service accounts in and out This is a separate different set of functionality from the password rotation feature above Let s walk through how to use it with explanation at each step First we ll need to enable the AD secrets engine and tell it how to talk to our AD server just as we did above shell 
## FAQ

### What if someone directly rotates an active directory password that Vault is managing?

If an administrator at your company rotates a password that Vault is managing,
the next time an application asks Vault for that password, Vault won't know
it. To maintain that application's up-time, Vault will need to return to a
state of knowing the password. Vault will generate a new password, update it,
and return it to the application(s) asking for it. This all occurs
automatically, without human intervention.

Thus, we wouldn't recommend that administrators directly rotate the passwords
for accounts that Vault is managing. This may lead to behavior the
administrator wouldn't expect, like finding very quickly afterwards that their
new password has already been changed.

The password `ttl` on a role can be updated at any time to ensure that the
responsibility of updating passwords can be left to Vault, rather than
requiring manual administrator updates.

### Why does Vault return the last password in addition to the current one?

Active Directory promises eventual consistency, which means that new passwords
may not be propagated to all instances immediately. To deal with this, Vault
returns the current password with the last password if it's known. That way,
if a new password isn't fully operational, the last password can also be used.

## Service account check-out

Vault offers the ability to check service accounts in and out. This is a
separate, different set of functionality from the password rotation feature
above. Let's walk through how to use it, with explanation at each step.

First we'll need to enable the AD secrets engine and tell it how to talk to
our AD server, just as we did above.

```shell-session
$ vault secrets enable ad
Success! Enabled the ad secrets engine at: ad/

$ vault write ad/config \
    binddn=$USERNAME \
    bindpass=$PASSWORD \
    url=ldaps://138.91.247.105 \
    userdn='dc=example,dc=com'
```

Our next step is to designate a set of service accounts for check-out.

```shell-session
$ vault write ad/library/accounting-team \
    service_account_names=fizz@example.com,buzz@example.com \
    ttl=10h \
    max_ttl=20h \
    disable_check_in_enforcement=false
```

In this example, the service account names of `fizz@example.com` and
`buzz@example.com` have already been created on the remote AD server. They've
been set aside solely for Vault to handle.

The `ttl` is how long each check-out will last before Vault checks in a
service account, rotating its password during check-in. The `max_ttl` is the
maximum amount of time it can live if it's renewed. These default to `24h`,
and both use [duration format strings](/vault/docs/concepts/duration-format).

Also by default, a service account must be checked in by the same Vault entity
or client token that checked it out. However, if this behavior causes
problems, set `disable_check_in_enforcement=true`.

When a library of service accounts has been created, view their status at any
time to see if they're available or checked out.

```shell-session
$ vault read ad/library/accounting-team/status
Key                 Value
---                 -----
buzz@example.com    map[available:true]
fizz@example.com    map[available:true]
```

To check out any service account that's available, simply execute:

```shell-session
$ vault write -f ad/library/accounting-team/check-out
Key                     Value
---                     -----
lease_id                ad/library/accounting-team/check-out/EpuS8cX7uEsDzOwW9kkKOyGW
lease_duration          10h
lease_renewable         true
password                ?@09AZKh03hBORZPJcTDgLfntlHqxLy29tcQjPVThzuwWAx?Twx4a2ZcRQRqrZ1w
service_account_name    fizz@example.com
```

If the default `ttl` for the check-out is higher than needed, set the
check-out to last for a shorter time by using:

```shell-session
$ vault write ad/library/accounting-team/check-out ttl=30m
Key                     Value
---                     -----
lease_id                ad/library/accounting-team/check-out/gMonJ2jB6kYs6d3Vw37WFDCY
lease_duration          30m
lease_renewable         true
password                ?@09AZerLLuJfEMbRqP?3yfQYDSq6laP48TCJRBJaJu?kDKLsq9WxL9szVAvL?E1
service_account_name    buzz@example.com
```

This can be a nice way to say, "Although I can have a check-out for 24 hours,
if I haven't checked it in after 30 minutes, I forgot or I'm a dead instance,
so you can just check it back in."

If no service accounts are available for check-out, Vault will return a 400
Bad Request.

```shell-session
$ vault write -f ad/library/accounting-team/check-out
Error writing data to ad/library/accounting-team/check-out: Error making API request.

URL: POST http://localhost:8200/v1/ad/library/accounting-team/check-out
Code: 400. Errors:

* No service accounts available for check-out.
```

To extend a check-out, renew its lease.

```shell-session
$ vault lease renew ad/library/accounting-team/check-out/0C2wmeaDmsToVFc0zDiX9cMq
Key                Value
---                -----
lease_id           ad/library/accounting-team/check-out/0C2wmeaDmsToVFc0zDiX9cMq
lease_duration     10h
lease_renewable    true
```

Renewing a check-out means its current password will live longer, since
passwords are rotated anytime a password is checked in (either by a caller,
or by Vault because the check-out `ttl` ends).

To check a service account back in for others to use, call:

```shell-session
$ vault write -f ad/library/accounting-team/check-in
Key          Value
---          -----
check_ins    [fizz@example.com]
```

Most of the time this will just work, but if multiple service accounts are
checked out by the same caller, Vault will need to know which one(s) to check
in.

```shell-session
$ vault write ad/library/accounting-team/check-in \
    service_account_names=fizz@example.com
Key          Value
---          -----
check_ins    [fizz@example.com]
```

To perform a check-in, Vault verifies that the caller should be able to check
in a given service account. To do this, Vault looks for either the same
[entity ID](/vault/tutorials/auth-methods/identity) used to check out the
service account, or the same client token.
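Because check-out and check-in are ordinary write operations (and status an
ordinary read), a library consumer's access can be granted with a standard ACL
policy. A minimal sketch, with an illustrative policy name:

```shell-session
$ vault policy write accounting-team-library -<<EOF
# Check accounting-team service accounts out and back in
path "ad/library/accounting-team/check-out" {
  capabilities = ["update"]
}
path "ad/library/accounting-team/check-in" {
  capabilities = ["update"]
}
# See which accounts are available or checked out
path "ad/library/accounting-team/status" {
  capabilities = ["read"]
}
EOF
```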
If a caller is unable to check in a service account, or simply doesn't try,
Vault will check it back in automatically when the `ttl` expires. However, if
that is too long, service accounts can be forcibly checked in by a highly
privileged user through:

```shell-session
$ vault write -f ad/library/manage/accounting-team/check-in
Key          Value
---          -----
check_ins    [fizz@example.com]
```

Or, alternatively, revoking the secret's lease has the same effect.

```shell-session
$ vault lease revoke ad/library/accounting-team/check-out/PvBVG0m7pEg2940Cb3Jw3KpJ
All revocation operations queued successfully!
```

## Troubleshooting

### Old passwords are still valid for a period of time

During testing, we found that by default, many versions of Active Directory
perpetuate old passwords for a short while. After we discovered this behavior,
we found articles discussing it by searching for "AD password caching" and
"OldPasswordAllowedPeriod". We also found
[an article from Microsoft](https://support.microsoft.com/en-us/help/906305/new-setting-modifies-ntlm-network-authentication-behavior)
discussing how to configure this behavior.

This behavior appears to vary by AD version. We recommend you test the
behavior of your particular AD server, and edit its settings to gain the
desired behavior.

### I get a lot of 400 Bad Request errors when trying to check out service accounts

This will occur when there aren't enough service accounts for those requesting
them.

Let's suppose our `accounting-team` service accounts are the ones being
requested. When Vault receives a check-out call but none are available, Vault
will log at debug level: "accounting-team had no check-outs available". Vault
will also increment a metric containing the strings "active directory",
"check out unavailable", and "accounting-team".

Once it's known which library needs more service accounts for check-out, fix
this issue by merely creating a new service account for it to use in Active
Directory, then adding it to Vault like so:

```shell-session
$ vault write ad/library/accounting-team \
    service_account_names=fizz@example.com,buzz@example.com,new@example.com
```

In this example, `fizz` and `buzz` were pre-existing, but were still included
in the call because we'd like them to exist in the resulting set. The `new`
account was appended to the end.

### Sometimes Vault gives me a password, but then AD says it's not valid

Active Directory is eventually consistent, meaning that it can take some time
for word of a new password to travel across all AD instances in a cluster. In
larger clusters, we have observed the password taking over 10 seconds to
propagate fully. The simplest way to handle this is to simply wait and retry
using the new password.

### When trying to read credentials I get "LDAP result code 53, Unwilling to perform"

Active Directory will only support password changes over a secure connection.
Ensure that your configuration block is not using an unsecured LDAP
connection.

## Tutorial

Refer to the [Active Directory Service Account Check-out](/vault/tutorials/secrets-management/active-directory)
tutorial to learn how to enable a team to share a select set of service
accounts.

## API

The Active Directory secrets engine has a full HTTP API. Please see the
[Active Directory secrets engine API](/vault/api-docs/secret/ad) for more
details.
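For orientation, the CLI commands in this guide map directly onto that HTTP
API. A credential read, for example, might look like the following, assuming
the engine is mounted at `ad/` and `VAULT_ADDR` and `VAULT_TOKEN` are set in
the environment:

```shell-session
$ curl \
    --header "X-Vault-Token: $VAULT_TOKEN" \
    $VAULT_ADDR/v1/ad/creds/my-application
```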