Chapter 8. Creating infrastructure machine sets Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Red Hat OpenShift Service Mesh deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 8.1. OpenShift Container Platform infrastructure components Each self-managed Red Hat OpenShift subscription includes entitlements for OpenShift Container Platform and other OpenShift-related components. These entitlements are included for running OpenShift Container Platform control plane and infrastructure workloads and do not need to be accounted for during sizing. To qualify as an infrastructure node and use the included entitlement, only components that are supporting the cluster, and not part of an end-user application, can run on those instances. Examples include the following components: Kubernetes and OpenShift Container Platform control plane services The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Red Hat Quay Red Hat OpenShift Data Foundation Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Red Hat OpenShift Service Mesh Any node that runs any other container, pod, or component is a worker node that your subscription must cover. For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. To create an infrastructure node, you can use a machine set , label the node , or use a machine config pool . 8.2. Creating infrastructure machine sets for production environments In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. 
Red Hat OpenShift Service Mesh deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 8.2.1. Creating infrastructure machine sets for different clouds Use the sample compute machine set for your cloud. 8.2.1.1. Sample YAML for a compute machine set custom resource on Alibaba Cloud This sample YAML defines a compute machine set that runs in a specified Alibaba Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<zone> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 10 spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: alibabacloud-credentials imageId: <image_id> 11 instanceType: <instance_type> 12 kind: AlibabaCloudMachineProviderConfig ramRoleName: <infrastructure_id>-role-worker 13 regionId: <region> 14 resourceGroup: 15 id: <resource_group_id> type: ID securityGroups: - tags: 16 - Key: Name Value: <infrastructure_id>-sg-<role> type: Tags systemDisk: 17 category: cloud_essd size: <disk_size> tag: 18 - Key: kubernetes.io/cluster/<infrastructure_id> Value: owned userDataSecret: name: <user_data_secret> 19 vSwitch: tags: 20 - Key: Name Value: <infrastructure_id>-vswitch-<zone> type: Tags vpcId: "" zoneId: <zone> 21 taints: 22 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 5 7 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID, <infra> node label, and zone. 11 Specify the image to use. Use an image from an existing default compute machine set for the cluster. 12 Specify the instance type you want to use for the compute machine set. 13 Specify the name of the RAM role to use for the compute machine set. Use the value that the installer populates in the default compute machine set. 14 Specify the region to place machines on. 15 Specify the resource group and type for the cluster. 
You can use the value that the installer populates in the default compute machine set, or specify a different one. 16 18 20 Specify the tags to use for the compute machine set. Minimally, you must include the tags shown in this example, with appropriate values for your cluster. You can include additional tags, including the tags that the installer populates in the default compute machine set it creates, as needed. 17 Specify the type and size of the root disk. Use the category value that the installer populates in the default compute machine set it creates. If required, specify a different value in gigabytes for size . 19 Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installer populates in the default compute machine set. 21 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 22 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine set parameters for Alibaba Cloud usage statistics The default compute machine sets that the installer creates for Alibaba Cloud clusters include nonessential tag values that Alibaba Cloud uses internally to track usage statistics. These tags are populated in the securityGroups , tag , and vSwitch parameters of the spec.template.spec.providerSpec.value list. When creating compute machine sets to deploy additional machines, you must include the required Kubernetes tags. The usage statistics tags are applied by default, even if they are not specified in the compute machine sets you create. You can also include additional tags as needed. The following YAML snippets indicate which tags in the default compute machine sets are optional and which are required. Tags in spec.template.spec.providerSpec.value.securityGroups spec: template: spec: providerSpec: value: securityGroups: - tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2 Value: ocp - Key: Name Value: <infrastructure_id>-sg-<role> 3 type: Tags 1 2 Optional: This tag is applied even when not specified in the compute machine set. 3 Required. where: <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. <role> is the node label to add. Tags in spec.template.spec.providerSpec.value.tag spec: template: spec: providerSpec: value: tag: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp 2 3 Optional: This tag is applied even when not specified in the compute machine set. 1 Required. where <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. Tags in spec.template.spec.providerSpec.value.vSwitch spec: template: spec: providerSpec: value: vSwitch: tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp - Key: Name Value: <infrastructure_id>-vswitch-<zone> 4 type: Tags 1 2 3 Optional: This tag is applied even when not specified in the compute machine set. 4 Required. 
where: <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. <zone> is the zone within your region to place machines on. 8.2.1.2. Sample YAML for a compute machine set custom resource on AWS This sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: infra 6 machine.openshift.io/cluster-api-machine-type: infra 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: "" 9 providerSpec: value: ami: id: ami-046fe691f52a953f9 10 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 11 instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 12 region: <region> 13 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 14 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 15 tags: - name: kubernetes.io/cluster/<infrastructure_id> 16 value: owned - name: <custom_tag_name> 17 value: <custom_tag_value> 18 userDataSecret: name: worker-user-data taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 3 5 11 14 16 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID, infra role node label, and zone. 6 7 9 Specify the infra role node label. 10 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \ get machineset/<infrastructure_id>-<role>-<zone> 17 18 Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:[email protected] . Note Custom tags can also be specified during installation in the install-config.yml file. 
If the install-config.yml file and the machine set include a tag with the same name data, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yml file. 12 Specify the zone, for example, us-east-1a . 13 Specify the region, for example, us-east-1 . 15 Specify the infrastructure ID and zone. 19 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on AWS support non-guaranteed Spot Instances . You can save on costs by using Spot Instances at a lower price compared to On-Demand Instances on AWS. Configure Spot Instances by adding spotMarketOptions to the MachineSet YAML file. 8.2.1.3. Sample YAML for a compute machine set custom resource on Azure This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and infra is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: infra 2 machine.openshift.io/cluster-api-machine-type: infra name: <infrastructure_id>-infra-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg sshPrivateKey: "" sshPublicKey: "" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: "1" 8 taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 Specify the infra node label. 3 Specify the infrastructure ID, infra node label, and region. 4 Specify the image details for your compute machine set. If you want to use an Azure Marketplace image, see "Selecting an Azure Marketplace image". 5 Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 6 Specify the region to place machines on. 7 Optional: Specify custom tags in your machine set. Provide the tag name in <custom_tag_name> field and the corresponding tag value in <custom_tag_value> field. 8 Specify the zone within your region to place machines on. Ensure that your region supports the zone that you specify. Important If your region supports availability zones, you must specify the zone. Specifying the zone avoids volume node affinity failure when a pod requires a persistent volume attachment. To do this, you can create a compute machine set for each zone in the same region. 9 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on Azure support non-guaranteed Spot VMs . You can save on costs by using Spot VMs at a lower price compared to standard VMs on Azure. You can configure Spot VMs by adding spotVMOptions to the MachineSet YAML file. Additional resources Selecting an Azure Marketplace image 8.2.1.4. Sample YAML for a compute machine set custom resource on Azure Stack Hub This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: "" sshPublicKey: "" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: "1" 22 1 5 7 14 16 17 18 21 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 3 8 9 11 19 20 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID, <infra> node label, and region. 12 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 15 Specify the region to place machines on. 13 Specify the availability set for the cluster. 22 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. Note Machine sets running on Azure Stack Hub do not support non-guaranteed Spot VMs. 8.2.1.5. 
Sample YAML for a compute machine set custom resource on IBM Cloud This sample YAML defines a compute machine set that runs in a specified IBM Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 5 7 The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 16 The <infra> node label. 4 6 10 The infrastructure ID, <infra> node label, and region. 11 The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation. 12 The infrastructure ID and zone within your region to place machines on. Be sure that your region supports the zone that you specify. 13 Specify the IBM Cloud instance profile . 14 Specify the region to place machines on. 15 The resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID. 17 The VPC name. 18 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 19 The taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 8.2.1.6. Sample YAML for a compute machine set custom resource on GCP This sample YAML defines a compute machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" , where infra is the node label to add. 
Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI. Infrastructure ID The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Image path The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \ get machineset/<infrastructure_id>-worker-a Sample GCP MachineSet values apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 7 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 2 For <infra> , specify the <infra> node label. 3 Specify the path to the image that is used in current compute machine sets. To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 5 For <project_name> , specify the name of the GCP project that you use for your cluster. 
6 Specifies a single service account. Multiple service accounts are not supported. 7 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on GCP support non-guaranteed preemptible VM instances . You can save on costs by using preemptible VM instances at a lower price compared to normal instances on GCP. You can configure preemptible VM instances by adding preemptible to the MachineSet YAML file. 8.2.1.7. Sample YAML for a compute machine set custom resource on Nutanix This sample YAML defines a Nutanix compute machine set that creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI ( oc ). Infrastructure ID The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> name: <infrastructure_id>-<infra>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: "16384" machine.openshift.io/vCPU: "4" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: "" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14 taints: 15 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 2 Specify the <infra> node label. 3 Specify the infrastructure ID, <infra> node label, and zone. 4 Annotations for the cluster autoscaler. 5 Specifies the boot type that the compute machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . 
Valid values are Legacy , SecureBoot , or UEFI . The default is Legacy . Note You must use the Legacy boot type in OpenShift Container Platform 4.13. 6 Specify one or more Nutanix Prism categories to apply to compute machines. This stanza requires key and value parameters for a category key-value pair that exists in Prism Central. For more information about categories, see Category management . 7 Specify a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid , so there is a uuid stanza. 8 Specify the image to use. Use an image from an existing default compute machine set for the cluster. 9 Specify the amount of memory for the cluster in Gi. 10 Specify the Nutanix project that you use for your cluster. In this example, the project type is name , so there is a name stanza. 11 Specify the size of the system disk in Gi. 12 Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that installation program populates in the default compute machine set. 13 Specify the number of vCPU sockets. 14 Specify the number of vCPUs per socket. 15 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 8.2.1.8. Sample YAML for a compute machine set custom resource on RHOSP This sample YAML defines a compute machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone> 1 5 7 14 16 17 18 19 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 20 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID and <infra> node label. 11 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 12 To set a server group policy for the MachineSet, enter the value that is returned from creating a server group . For most deployments, anti-affinity or soft-anti-affinity policies are recommended. 13 Required for deployments to multiple networks. If deploying to multiple networks, this list must include the network that is used as the primarySubnet value. 15 Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file. 8.2.1.9. Sample YAML for a compute machine set custom resource on RHV This sample YAML defines a compute machine set that runs on RHV and creates nodes that are labeled with node-role.kubernetes.io/<node_role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 Selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: "" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 sparse: <boolean_value> 16 format: <raw_or_cow> 17 cpu: 18 sockets: <number_of_sockets> 19 cores: <number_of_cores> 20 threads: <number_of_threads> 21 memory_mb: <memory_size> 22 guaranteed_memory_mb: <memory_size> 23 os_disk: 24 size_gb: <disk_size> 25 storage_domain_id: <storage_domain_UUID> 26 network_interfaces: 27 vnic_profile_id: <vnic_profile_id> 28 credentialsSecret: name: ovirt-credentials 29 kind: OvirtMachineProviderSpec type: <workload_type> 30 auto_pinning_policy: <auto_pinning_policy> 31 hugepages: <hugepages> 32 affinityGroupsNames: - compute 33 userDataSecret: name: worker-user-data 1 7 9 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 10 11 13 Specify the node label to add. 4 8 12 Specify the infrastructure ID and node label. These two strings together cannot be longer than 35 characters. 5 Specify the number of machines to create. 6 Selector for the machines. 14 Specify the UUID for the RHV cluster to which this VM instance belongs. 15 Specify the RHV VM template to use to create the machine. 16 Setting this option to false enables preallocation of disks. The default is true . Setting sparse to true with format set to raw is not available for block storage domains. The raw format writes the entire virtual disk to the underlying physical disk. 17 Can be set to cow or raw . The default is cow . The cow format is optimized for virtual machines. Note Preallocating disks on file storage domains writes zeroes to the file. This might not actually preallocate disks depending on the underlying storage. 18 Optional: The CPU field contains the CPU configuration, including sockets, cores, and threads. 19 Optional: Specify the number of sockets for a VM. 20 Optional: Specify the number of cores per socket. 21 Optional: Specify the number of threads per core. 22 Optional: Specify the size of a VM's memory in MiB. 23 Optional: Specify the size of a virtual machine's guaranteed memory in MiB. This is the amount of memory that is guaranteed not to be drained by the ballooning mechanism. For more information, see Memory Ballooning and Optimization Settings Explained . Note If you are using a version earlier than RHV 4.4.8, see Guaranteed memory requirements for OpenShift on Red Hat Virtualization clusters . 
24 Optional: Root disk of the node. 25 Optional: Specify the size of the bootable disk in GiB. 26 Optional: Specify the UUID of the storage domain for the compute node's disks. If none is provided, the compute node is created on the same storage domain as the control nodes. (default) 27 Optional: List of the network interfaces of the VM. If you include this parameter, OpenShift Container Platform discards all network interfaces from the template and creates new ones. 28 Optional: Specify the vNIC profile ID. 29 Specify the name of the secret object that holds the RHV credentials. 30 Optional: Specify the workload type for which the instance is optimized. This value affects the RHV VM parameter. Supported values: desktop , server (default), high_performance . high_performance improves performance on the VM. Limitations exist, for example, you cannot access the VM with a graphical console. For more information, see Configuring High Performance Virtual Machines, Templates, and Pools in the Virtual Machine Management Guide . 31 Optional: AutoPinningPolicy defines the policy that automatically sets CPU and NUMA settings, including pinning to the host for this instance. Supported values: none , resize_and_pin . For more information, see Setting NUMA Nodes in the Virtual Machine Management Guide . 32 Optional: Hugepages is the size in KiB for defining hugepages in a VM. Supported values: 2048 or 1048576 . For more information, see Configuring Huge Pages in the Virtual Machine Management Guide . 33 Optional: A list of affinity group names to be applied to the VMs. The affinity groups must exist in oVirt. Note Because RHV uses a template when creating a VM, if you do not specify a value for an optional parameter, RHV uses the value for that parameter that is specified in the template. 8.2.1.10. Sample YAML for a compute machine set custom resource on vSphere This sample YAML defines a compute machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: "<vm_network_name>" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and <infra> node label. 6 7 9 Specify the <infra> node label. 10 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 11 Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other compute machines reside in the cluster. 12 Specify the vSphere VM template to use, such as user-5ddjd-rhcos . 13 Specify the vCenter Datacenter to deploy the compute machine set on. 14 Specify the vCenter Datastore to deploy the compute machine set on. 15 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 16 Specify the vSphere resource pool for your VMs. 17 Specify the vCenter server IP or fully qualified domain name. 8.2.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. 
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 8.2.3. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Requirements of the cluster dictate that infrastructure, also called infra nodes, be provisioned. The installer only provides provisions for control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called app , nodes through labeling. Procedure Add a label to the worker node that you want to act as application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check to see if applicable nodes now have the infra role and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. 
This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra="" 1 # ... 1 This example node selector deploys pods on infrastructure nodes by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources Moving resources to infrastructure machine sets 8.2.4. Creating a machine config pool for infrastructure machines If you need infrastructure machines to have dedicated configurations, you must create an infra pool. Procedure Add a label to the node you want to assign as the infra node with a specific label: USD oc label node <node_name> <label> USD oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra= Create a machine config pool that contains both the worker role and your custom role as machine config selector: USD cat infra.mcp.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" 2 1 Add the worker role and your custom role. 2 Add the label you added to the node as a nodeSelector . Note Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool. 
After you have the YAML file, you can create the machine config pool: USD oc create -f infra.mcp.yaml Check the machine configs to ensure that the infrastructure configuration rendered successfully: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d You should see a new machine config, with the rendered-infra-* prefix. Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra . Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes. Note After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration. Create a machine config: USD cat infra.mc.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra 1 Add the label you added to the node as a nodeSelector . 
Apply the machine config to the infra-labeled nodes: USD oc create -f infra.mc.yaml Confirm that your new machine config pool is available: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m In this example, a worker node was changed to an infra node. Additional resources See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool. 8.3. Assigning machine set resources to infrastructure nodes After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role applied are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied. However, with an infra node being assigned as a worker, there is a chance user workloads could get inadvertently assigned to an infra node. To avoid this, you can apply a taint to the infra node and tolerations for the pods you want to control. 8.3.1. Binding infrastructure node workloads using taints and tolerations If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it. Important It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exist. The infra label communicates to the cluster that it does not count toward the total number of subscriptions. Prerequisites Configure additional MachineSet objects in your OpenShift Container Platform cluster. Procedure Add a taint to the infra node to prevent scheduling user workloads on it: Determine if the node has the taint: USD oc describe nodes <node_name> Sample output oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker ... Taints: node-role.kubernetes.io/infra:NoSchedule ... This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step. If you have not configured a taint to prevent scheduling user workloads on it: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: node-role.kubernetes.io/infra effect: NoSchedule value: reserved ... This example places a taint on node1 that has the key node-role.kubernetes.io/infra and taint effect NoSchedule . Nodes with the NoSchedule effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node. Note If a descheduler is used, pods violating node taints could be evicted from the cluster.
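Optional: You can confirm the taint by reading the taints field of the node directly. For example: USD oc get node <node_name> -o jsonpath='{.spec.taints}' The output should show the node-role.kubernetes.io/infra key with the NoSchedule effect if the taint from the previous command was applied.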
Add the taint with the NoExecute effect along with the above taint with the NoSchedule effect: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved ... This example places a taint on node1 that has the key node-role.kubernetes.io/infra and taint effect NoExecute . Nodes with the NoExecute effect schedule only pods that tolerate the taint. The effect will remove any existing pods from the node that do not have a matching toleration. Add tolerations for the pod configurations you want to schedule on the infra node, like router, registry, and monitoring workloads. Add the following code to the Pod object specification: tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 value: reserved 3 - effect: NoExecute 4 key: node-role.kubernetes.io/infra 5 operator: Equal 6 value: reserved 7 1 Specify the effect that you added to the node. 2 Specify the key that you added to the node. 3 Specify the value of the key-value pair taint that you added to the node. 4 Specify the effect that you added to the node. 5 Specify the key that you added to the node. 6 Specify the Equal Operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node. 7 Specify the value of the key-value pair taint that you added to the node. This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node. Note Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details. Additional resources See Controlling pod placement using the scheduler for general information on scheduling a pod to a node. See Moving resources to infrastructure machine sets for instructions on scheduling pods to infra nodes. See Understanding taints and tolerations for more details about different effects of taints. 8.4. Moving resources to infrastructure machine sets Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created by adding the infrastructure node selector, as shown: spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Applying a specific node selector to all infrastructure components causes OpenShift Container Platform to schedule those workloads on nodes with that label . 8.4.1. Moving the router You can deploy the router pod to a different compute machine set. By default, the pod is deployed to a worker node. Prerequisites Configure additional compute machine sets in your OpenShift Container Platform cluster.
Procedure View the IngressController custom resource for the router Operator: USD oc get ingresscontroller default -n openshift-ingress-operator -o yaml The command output resembles the following text: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: "11341" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: "True" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default Edit the ingresscontroller resource and change the nodeSelector to use the infra label: USD oc edit ingresscontroller default -n openshift-ingress-operator spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Confirm that the router pod is running on the infra node. View the list of router pods and note the node name of the running pod: USD oc get pod -n openshift-ingress -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none> In this example, the running pod is on the ip-10-0-217-226.ec2.internal node. View the node status of the running pod: USD oc get node <node_name> 1 1 Specify the <node_name> that you obtained from the pod list. Example output NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.26.0 Because the role list includes infra , the pod is running on the correct node. 8.4.2. Moving the default registry You configure the registry Operator to deploy its pods to different nodes. Prerequisites Configure additional compute machine sets in your OpenShift Container Platform cluster. Procedure View the config/instance object: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: "56174" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status: ... 
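Optional: To check only whether a node selector is already set on the registry before you edit it, you can query that field directly. For example: USD oc get configs.imageregistry.operator.openshift.io/cluster -o jsonpath='{.spec.nodeSelector}' Empty output indicates that no node selector is currently configured.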
Edit the config/instance object: USD oc edit configs.imageregistry.operator.openshift.io/cluster spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrasructure node, also add a matching toleration. Verify the registry pod has been moved to the infrastructure node. Run the following command to identify the node where the registry pod is located: USD oc get pods -o wide -n openshift-image-registry Confirm the node has the label you specified: USD oc describe node <node_name> Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list. 8.4.3. Moving the monitoring solution The monitoring stack includes multiple components, including Prometheus, Thanos Querier, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: 
reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Watch the monitoring pods move to the new machines: USD watch 'oc get pod -n openshift-monitoring -o wide' If a component has not moved to the infra node, delete the pod with this component: USD oc delete pod -n openshift-monitoring <pod> The component from the deleted pod is re-created on the infra node. Additional resources Moving monitoring components to different nodes Using node selectors to move logging resources Using taints and tolerations to control logging pod placement | [
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<zone> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: alibabacloud-credentials imageId: <image_id> 11 instanceType: <instance_type> 12 kind: AlibabaCloudMachineProviderConfig ramRoleName: <infrastructure_id>-role-worker 13 regionId: <region> 14 resourceGroup: 15 id: <resource_group_id> type: ID securityGroups: - tags: 16 - Key: Name Value: <infrastructure_id>-sg-<role> type: Tags systemDisk: 17 category: cloud_essd size: <disk_size> tag: 18 - Key: kubernetes.io/cluster/<infrastructure_id> Value: owned userDataSecret: name: <user_data_secret> 19 vSwitch: tags: 20 - Key: Name Value: <infrastructure_id>-vswitch-<zone> type: Tags vpcId: \"\" zoneId: <zone> 21 taints: 22 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"spec: template: spec: providerSpec: value: securityGroups: - tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2 Value: ocp - Key: Name Value: <infrastructure_id>-sg-<role> 3 type: Tags",
"spec: template: spec: providerSpec: value: tag: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp",
"spec: template: spec: providerSpec: value: vSwitch: tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp - Key: Name Value: <infrastructure_id>-vswitch-<zone> 4 type: Tags",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: infra 6 machine.openshift.io/cluster-api-machine-type: infra 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" 9 providerSpec: value: ami: id: ami-046fe691f52a953f9 10 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 11 instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 12 region: <region> 13 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 14 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 15 tags: - name: kubernetes.io/cluster/<infrastructure_id> 16 value: owned - name: <custom_tag_name> 17 value: <custom_tag_value> 18 userDataSecret: name: worker-user-data taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: infra 2 machine.openshift.io/cluster-api-machine-type: infra name: <infrastructure_id>-infra-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: \"1\" 8 taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: \"1\" 22",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 7 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> name: <infrastructure_id>-<infra>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14 taints: 15 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone>",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 Selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 sparse: <boolean_value> 16 format: <raw_or_cow> 17 cpu: 18 sockets: <number_of_sockets> 19 cores: <number_of_cores> 20 threads: <number_of_threads> 21 memory_mb: <memory_size> 22 guaranteed_memory_mb: <memory_size> 23 os_disk: 24 size_gb: <disk_size> 25 storage_domain_id: <storage_domain_UUID> 26 network_interfaces: 27 vnic_profile_id: <vnic_profile_id> 28 credentialsSecret: name: ovirt-credentials 29 kind: OvirtMachineProviderSpec type: <workload_type> 30 auto_pinning_policy: <auto_pinning_policy> 31 hugepages: <hugepages> 32 affinityGroupsNames: - compute 33 userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc label node <node-name> node-role.kubernetes.io/app=\"\"",
"oc label node <node-name> node-role.kubernetes.io/infra=\"\"",
"oc get nodes",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1",
"oc label node <node_name> <label>",
"oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=",
"cat infra.mcp.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2",
"oc create -f infra.mcp.yaml",
"oc get machineconfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d",
"cat infra.mc.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra",
"oc create -f infra.mc.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m",
"oc describe nodes <node_name>",
"describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoSchedule value: reserved",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved",
"tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 value: reserved 3 - effect: NoExecute 4 key: node-role.kubernetes.io/infra 5 operator: Equal 6 value: reserved 7",
"spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get ingresscontroller default -n openshift-ingress-operator -o yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default",
"oc edit ingresscontroller default -n openshift-ingress-operator",
"spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pod -n openshift-ingress -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>",
"oc get node <node_name> 1",
"NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.26.0",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pods -o wide -n openshift-image-registry",
"oc describe node <node_name>",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute",
"watch 'oc get pod -n openshift-monitoring -o wide'",
"oc delete pod -n openshift-monitoring <pod>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/machine_management/creating-infrastructure-machinesets |
Chapter 36. Configuring huge pages | Chapter 36. Configuring huge pages Physical memory is managed in fixed-size chunks called pages. On the x86_64 architecture, supported by Red Hat Enterprise Linux 8, the default size of a memory page is 4 KB . This default page size has proved to be suitable for general-purpose operating systems, such as Red Hat Enterprise Linux, which supports many different kinds of workloads. However, specific applications can benefit from using larger page sizes in certain cases. For example, an application that works with a large and relatively fixed data set of hundreds of megabytes or even dozens of gigabytes can have performance issues when using 4 KB pages. Such data sets can require a huge number of 4 KB pages, which can lead to overhead in the operating system and the CPU. This section provides information about huge pages available in RHEL 8 and how you can configure them. 36.1. Available huge page features With Red Hat Enterprise Linux 8, you can use huge pages for applications that work with big data sets, and improve the performance of such applications. The following huge page methods are supported in RHEL 8: HugeTLB pages HugeTLB pages are also called static huge pages. There are two ways of reserving HugeTLB pages: At boot time: It increases the possibility of success because the memory has not yet been significantly fragmented. However, on NUMA machines, the number of pages is automatically split among the NUMA nodes. For more information about parameters that influence HugeTLB page behavior at boot time, see Parameters for reserving HugeTLB pages at boot time . For information about how to use these parameters to configure HugeTLB pages at boot time, see Configuring HugeTLB at boot time . At run time: It allows you to reserve the huge pages per NUMA node. If the run-time reservation is done as early as possible in the boot process, the probability of memory fragmentation is lower. For more information about parameters that influence HugeTLB page behavior at run time, see Parameters for reserving HugeTLB pages at run time . For information about how to use these parameters to configure HugeTLB pages at run time, see Configuring HugeTLB at run time . Transparent HugePages (THP) With THP, the kernel automatically assigns huge pages to processes, and therefore there is no need to manually reserve the static huge pages. The following are the two modes of operation in THP: system-wide : Here, the kernel tries to assign huge pages to a process whenever it is possible to allocate the huge pages and the process is using a large contiguous virtual memory area. per-process : Here, the kernel only assigns huge pages to the memory areas of individual processes, which you can specify using the madvise() system call. Note The THP feature only supports 2 MB pages. For more information about enabling and disabling THP, see Enabling transparent hugepages and Disabling transparent hugepages . 36.2. Parameters for reserving HugeTLB pages at boot time Use the following parameters to influence HugeTLB page behavior at boot time. For more information on how to use these parameters to configure HugeTLB pages at boot time, see Configuring HugeTLB at boot time . Table 36.1. Parameters used to configure HugeTLB pages at boot time Parameter Description Default value hugepages Defines the number of persistent huge pages configured in the kernel at boot time. In a NUMA system, huge pages that have this parameter defined are divided equally between nodes.
You can assign huge pages to specific nodes at runtime by changing the value in the /sys/devices/system/node/node_id/hugepages/hugepages-size/nr_hugepages file for that node. The default value is 0 . To update this value at boot, change the value of this parameter in the /proc/sys/vm/nr_hugepages file. hugepagesz Defines the size of persistent huge pages configured in the kernel at boot time. Valid values are 2 MB and 1 GB . The default value is 2 MB . default_hugepagesz Defines the default size of persistent huge pages configured in the kernel at boot time. Valid values are 2 MB and 1 GB . The default value is 2 MB . 36.3. Configuring HugeTLB at boot time The page size that the HugeTLB subsystem supports depends on the architecture. The x86_64 architecture supports 2 MB huge pages and 1 GB gigantic pages. This procedure describes how to reserve a 1 GB page at boot time. Procedure To create a HugeTLB pool for 1 GB pages, enable the default_hugepagesz=1G and hugepagesz=1G kernel options: Create a new file called hugetlb-gigantic-pages.service in the /usr/lib/systemd/system/ directory and add the following content: Create a new file called hugetlb-reserve-pages.sh in the /usr/lib/systemd/ directory and add the following content: While adding the following content, replace number_of_pages with the number of 1 GB pages you want to reserve, and node with the name of the node on which to reserve these pages. For example, to reserve two 1 GB pages on node0 and one 1 GB page on node1 , replace the number_of_pages with 2 for node0 and 1 for node1 : Make the script executable: Enable early boot reservation: Note You can try reserving more 1 GB pages at runtime by writing to nr_hugepages at any time. However, to prevent failures due to memory fragmentation, reserve 1 GB pages early during the boot process. Reserving static huge pages can effectively reduce the amount of memory available to the system, and prevent it from properly utilizing its full memory capacity. Although a properly sized pool of reserved huge pages can be beneficial to applications that utilize it, an oversized or unused pool of reserved huge pages will eventually be detrimental to overall system performance. When setting a reserved huge page pool, ensure that the system can properly utilize its full memory capacity. Additional resources systemd.service(5) man page on your system /usr/share/doc/kernel-doc-kernel_version/Documentation/vm/hugetlbpage.txt file 36.4. Parameters for reserving HugeTLB pages at run time Use the following parameters to influence HugeTLB page behavior at run time. For more information about how to use these parameters to configure HugeTLB pages at run time, see Configuring HugeTLB at run time . Table 36.2. Parameters used to configure HugeTLB pages at run time Parameter Description File name nr_hugepages Defines the number of huge pages of a specified size assigned to a specified NUMA node. /sys/devices/system/node/node_id/hugepages/hugepages-size/nr_hugepages nr_overcommit_hugepages Defines the maximum number of additional huge pages that can be created and used by the system through overcommitting memory. Writing any non-zero value into this file indicates that the system obtains that number of huge pages from the kernel's normal page pool if the persistent huge page pool is exhausted. As these surplus huge pages become unused, they are then freed and returned to the kernel's normal page pool. /proc/sys/vm/nr_overcommit_hugepages
36.5. Configuring HugeTLB at run time This procedure describes how to add 20 2048 kB huge pages to node2 . To reserve pages based on your requirements, replace: 20 with the number of huge pages you wish to reserve, 2048kB with the size of the huge pages, node2 with the node on which you wish to reserve the pages. Procedure Display the memory statistics: Add the number of huge pages of a specified size to the node: Verification Ensure that the huge pages are added: Additional resources numastat(8) man page on your system 36.6. Managing transparent hugepages Transparent hugepages (THP) are enabled by default in Red Hat Enterprise Linux 8. However, you can enable, disable, or set the transparent hugepages to madvise with runtime configuration, TuneD profiles, kernel command line parameters, or a systemd unit file. 36.6.1. Managing transparent hugepages with runtime configuration Transparent hugepages (THP) can be managed at runtime to optimize memory usage. The runtime configuration is not persistent across system reboots. Procedure Check the status of THP: Configure THP. Enabling THP: Disabling THP: Setting THP to madvise : To prevent applications from allocating more memory resources than necessary, disable the system-wide transparent hugepages and only enable them for the applications that explicitly request it through the madvise system call. Note Sometimes, providing low latency to short-lived allocations has higher priority than immediately achieving the best performance with long-lived allocations. In such cases, you can disable direct compaction while leaving THP enabled. Direct compaction is a synchronous memory compaction during the huge page allocation. Disabling direct compaction provides no guarantee of saving memory, but can decrease the risk of higher latencies during frequent page faults. Also, disabling direct compaction allows synchronous compaction of Virtual Memory Areas (VMAs) highlighted in madvise only. Note that if the workload benefits significantly from THP, disabling direct compaction decreases performance. Disable direct compaction: USD echo never > /sys/kernel/mm/transparent_hugepage/defrag Additional resources madvise(2) man page on your system. 36.6.2. Managing transparent hugepages with TuneD profiles You can manage transparent hugepages (THP) by using TuneD profiles. The tuned.conf file provides the configuration of TuneD profiles. This configuration is persistent across system reboots. Prerequisites The TuneD package is installed. The TuneD service is enabled. Procedure Copy the directory of the active profile to create a new profile: Edit the tuned.conf file: To enable THP, add the line: To disable THP, add the line: To set THP to madvise , add the line: Restart the TuneD service: Set the new profile active: Verification Verify that the new profile is active: Verify that the required mode of THP is set: 36.6.3. Managing transparent hugepages with kernel command line parameters You can manage transparent hugepages (THP) at boot time by modifying kernel parameters. This configuration is persistent across system reboots. Prerequisite You have root permissions on the system. Procedure Get the current kernel command line parameters: Configure THP by adding kernel parameters. To enable THP: To disable THP: To set THP to madvise : Reboot the system for changes to take effect: Verification To verify the status of THP, view the following files: 36.6.4. Managing transparent hugepages with a systemd unit file You can manage transparent hugepages (THP) at system startup by using systemd unit files.
By creating a systemd service, you get consistent THP configuration across system reboots. Prerequisite You have root permissions on the system. Procedure Create new systemd service files for enabling, disabling and setting THP to madvise . For example, /etc/systemd/system/disable-thp.service . Configure THP by adding the following contents to a new systemd service file. To enable THP, add the following content to <new_thp_file>.service file: To disable THP, add the following content to <new_thp_file>.service file: To set THP to madvise , add the following content to <new_thp_file>.service file: Enable and start the service: Verification To verify the status of THP, view the following files: 36.6.5. Additional resources You can also disable Transparent Huge Pages (THP) by setting up TuneD profile or using predefined TuneD profiles. See TuneD profiles distributed with RHEL and Available TuneD plug-ins . 36.7. Impact of page size on translation lookaside buffer size Reading address mappings from the page table is time-consuming and resource-expensive, so CPUs are built with a cache for recently-used addresses, called the Translation Lookaside Buffer (TLB). However, the default TLB can only cache a certain number of address mappings. If a requested address mapping is not in the TLB, called a TLB miss, the system still needs to read the page table to determine the physical to virtual address mapping. Because of the relationship between application memory requirements and the size of pages used to cache address mappings, applications with large memory requirements are more likely to suffer performance degradation from TLB misses than applications with minimal memory requirements. It is therefore important to avoid TLB misses wherever possible. Both HugeTLB and Transparent Huge Page features allow applications to use pages larger than 4 KB . This allows addresses stored in the TLB to reference more memory, which reduces TLB misses and improves application performance. | [
"grubby --update-kernel=ALL --args=\"default_hugepagesz=1G hugepagesz=1G\"",
"[Unit] Description=HugeTLB Gigantic Pages Reservation DefaultDependencies=no Before=dev-hugepages.mount ConditionPathExists=/sys/devices/system/node ConditionKernelCommandLine=hugepagesz=1G [Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/lib/systemd/hugetlb-reserve-pages.sh [Install] WantedBy=sysinit.target",
"#!/bin/sh nodes_path=/sys/devices/system/node/ if [ ! -d USDnodes_path ]; then echo \"ERROR: USDnodes_path does not exist\" exit 1 fi reserve_pages() { echo USD1 > USDnodes_path/USD2/hugepages/hugepages-1048576kB/nr_hugepages } reserve_pages number_of_pages node",
"reserve_pages 2 node0 reserve_pages 1 node1",
"chmod +x /usr/lib/systemd/hugetlb-reserve-pages.sh",
"systemctl enable hugetlb-gigantic-pages",
"numastat -cm | egrep 'Node|Huge' Node 0 Node 1 Node 2 Node 3 Total add AnonHugePages 0 2 0 8 10 HugePages_Total 0 0 0 0 0 HugePages_Free 0 0 0 0 0 HugePages_Surp 0 0 0 0 0",
"echo 20 > /sys/devices/system/node/node2/hugepages/hugepages- 2048kB /nr_hugepages",
"numastat -cm | egrep 'Node|Huge' Node 0 Node 1 Node 2 Node 3 Total AnonHugePages 0 2 0 8 10 HugePages_Total 0 0 40 0 40 HugePages_Free 0 0 40 0 40 HugePages_Surp 0 0 0 0 0",
"cat /sys/kernel/mm/transparent_hugepage/enabled",
"echo always > /sys/kernel/mm/transparent_hugepage/enabled",
"echo never > /sys/kernel/mm/transparent_hugepage/enabled",
"echo madvise > /sys/kernel/mm/transparent_hugepage/enabled",
"sudo cp -R /usr/lib/tuned/ my_profile /usr/lib/tuned/ my_copied_profile",
"sudo vi /usr/lib/tuned/ my_copied_profile /tuned.conf",
"[bootloader] cmdline = transparent_hugepage=always",
"[bootloader] cmdline = transparent_hugepage=never",
"[bootloader] cmdline = transparent_hugepage=madvise",
"sudo systemctl restart tuned",
"sudo tuned-adm profile my_copied_profile",
"sudo tuned-adm active",
"cat /sys/kernel/mm/transparent_hugepage/enabled",
"grubby --info=USD(grubby --default-kernel) kernel=\"/boot/vmlinuz-4.18.0-553.el8_10.x86_64\" args=\"ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=UUID= XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXXX console=tty0 console=ttyS0\" root=\"UUID= XXXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX \" initrd=\"/boot/initramfs-4.18.0-553.el8_10.x86_64.img\" title=\"Red Hat Enterprise Linux (4.18.0-553.el8_10.x86_64) 8.10 (Ootpa)\" id=\" XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX -4.18.0-553.el8_10.x86_64\"",
"grubby --args=\"transparent_hugepage=always\" --update-kernel=DEFAULT",
"grubby --args=\"transparent_hugepage=never\" --update-kernel=DEFAULT",
"grubby --args=\"transparent_hugepage=madvise\" --update-kernel=DEFAULT",
"reboot",
"cat /sys/kernel/mm/transparent_hugepage/enabled always madvise [never]",
"grep AnonHugePages: /proc/meminfo AnonHugePages: 0 kB",
"grep nr_anon_transparent_hugepages /proc/vmstat nr_anon_transparent_hugepages 0",
"[Unit] Description=Enable Transparent Hugepages After=local-fs.target Before=sysinit.target [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/sh -c 'echo always > /sys/kernel/mm/transparent_hugepage/enabled [Install] WantedBy=multi-user.target",
"[Unit] Description=Disable Transparent Hugepages After=local-fs.target Before=sysinit.target [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled [Install] WantedBy=multi-user.target",
"[Unit] Description=Madvise Transparent Hugepages After=local-fs.target Before=sysinit.target [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/sh -c 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled [Install] WantedBy=multi-user.target",
"systemctl enable <new_thp_file>.service",
"systemctl start <new_thp_file>.service",
"cat /sys/kernel/mm/transparent_hugepage/enabled"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/configuring-huge-pages_monitoring-and-managing-system-status-and-performance |
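To see whether the huge page configuration described in the sections above actually reduces TLB pressure for a particular workload, you can compare data-TLB miss counts before and after the change. The following is a minimal sketch using perf; the dTLB-* event names and the ./your_app workload are assumptions, because the exact events that are available depend on your CPU and kernel.

# Count data-TLB loads and misses for one run of the workload.
perf stat -e dTLB-loads,dTLB-load-misses ./your_app

# Alternatively, sample an already running process (replace <pid>) for 30 seconds.
perf stat -e dTLB-loads,dTLB-load-misses -p <pid> -- sleep 30

A lower ratio of dTLB-load-misses to dTLB-loads after reserving huge pages or enabling THP indicates that the larger page size is reducing TLB misses, as described in section 36.7.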
Chapter 8. Creating infrastructure machine sets | Chapter 8. Creating infrastructure machine sets Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 8.1. OpenShift Container Platform infrastructure components The following infrastructure workloads do not incur OpenShift Container Platform worker subscriptions: Kubernetes and OpenShift Container Platform control plane services that run on masters The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Service brokers Red Hat Quay Red Hat OpenShift Data Foundation Red Hat Advanced Cluster Manager Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Any node that runs any other container, pod, or component is a worker node that your subscription must cover. For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. To create an infrastructure node, you can use a machine set , label the node , or use a machine config pool . 8.2. Creating infrastructure machine sets for production environments In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 8.2.1. 
Creating machine sets for different clouds Use the sample machine set for your cloud. 8.2.1.1. Sample YAML for a machine set custom resource on Alibaba Cloud This sample YAML defines a machine set that runs in a specified Alibaba Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<zone> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 10 spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: alibabacloud-credentials imageId: <image_id> 11 instanceType: <instance_type> 12 kind: AlibabaCloudMachineProviderConfig ramRoleName: <infrastructure_id>-role-worker 13 regionId: <region> 14 resourceGroup: 15 id: <resource_group_id> type: ID securityGroups: - tags: 16 - Key: Name Value: <infrastructure_id>-sg-<role> type: Tags systemDisk: 17 category: cloud_essd size: <disk_size> tag: 18 - Key: kubernetes.io/cluster/<infrastructure_id> Value: owned userDataSecret: name: <user_data_secret> 19 vSwitch: tags: 20 - Key: Name Value: <infrastructure_id>-vswitch-<zone> type: Tags vpcId: "" zoneId: <zone> 21 taints: 22 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 5 7 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID, <infra> node label, and zone. 11 Specify the image to use. Use an image from an existing default machine set for the cluster. 12 Specify the instance type you want to use for the machine set. 13 Specify the name of the RAM role to use for the machine set. Use the value that the installer populates in the default machine set. 14 Specify the region to place machines on. 15 Specify the resource group and type for the cluster. You can use the value that the installer populates in the default machine set, or specify a different one. 16 18 20 Specify the tags to use for the machine set. Minimally, you must include the tags shown in this example, with appropriate values for your cluster. You can include additional tags, including the tags that the installer populates in the default machine set it creates, as needed. 17 Specify the type and size of the root disk. Use the category value that the installer populates in the default machine set it creates. If required, specify a different value in gigabytes for size . 
19 Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installer populates in the default machine set. 21 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 22 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine set parameters for Alibaba Cloud usage statistics The default machine sets that the installer creates for Alibaba Cloud clusters include nonessential tag values that Alibaba Cloud uses internally to track usage statistics. These tags are populated in the securityGroups , tag , and vSwitch parameters of the spec.template.spec.providerSpec.value list. When creating machine sets to deploy additional machines, you must include the required Kubernetes tags. The usage statistics tags are applied by default, even if they are not specified in the machine sets you create. You can also include additional tags as needed. The following YAML snippets indicate which tags in the default machine sets are optional and which are required. Tags in spec.template.spec.providerSpec.value.securityGroups spec: template: spec: providerSpec: value: securityGroups: - tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2 Value: ocp - Key: Name Value: <infrastructure_id>-sg-<role> 3 type: Tags 1 2 Optional: This tag is applied even when not specified in the machine set. 3 Required. where: <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. <role> is the node label to add. Tags in spec.template.spec.providerSpec.value.tag spec: template: spec: providerSpec: value: tag: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp 2 3 Optional: This tag is applied even when not specified in the machine set. 1 Required. where <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. Tags in spec.template.spec.providerSpec.value.vSwitch spec: template: spec: providerSpec: value: vSwitch: tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp - Key: Name Value: <infrastructure_id>-vswitch-<zone> 4 type: Tags 1 2 3 Optional: This tag is applied even when not specified in the machine set. 4 Required. where: <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. <zone> is the zone within your region to place machines on. 8.2.1.2. Sample YAML for a machine set custom resource on AWS This sample YAML defines a machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: "" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: ami: id: ami-046fe691f52a953f9 11 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 12 instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 13 region: <region> 14 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 15 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 16 tags: - name: kubernetes.io/cluster/<infrastructure_id> 17 value: owned userDataSecret: name: worker-user-data 1 3 5 12 15 17 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID, <infra> node label, and zone. 6 7 9 Specify the <infra> node label. 10 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 11 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) AMI for your AWS zone for your OpenShift Container Platform nodes. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \ get machineset/<infrastructure_id>-worker-<zone> 13 Specify the zone, for example, us-east-1a . 14 Specify the region, for example, us-east-1 . 16 Specify the infrastructure ID and zone. Machine sets running on AWS support non-guaranteed Spot Instances . You can save on costs by using Spot Instances at a lower price compared to On-Demand Instances on AWS. Configure Spot Instances by adding spotMarketOptions to the MachineSet YAML file. 8.2.1.3. Sample YAML for a machine set custom resource on Azure This sample YAML defines a machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/infra: "" 11 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 12 offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: "" sshPublicKey: "" tags: - name: <custom_tag_name> 17 value: <custom_tag_value> 18 subnet: <infrastructure_id>-<role>-subnet 19 20 userDataSecret: name: worker-user-data 21 vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet 22 zone: "1" 23 taints: 24 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 5 7 15 16 19 22 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 3 8 9 11 20 21 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID, <infra> node label, and region. 12 Specify the image details for your machine set. If you want to use an Azure Marketplace image, see "Selecting an Azure Marketplace image". 13 Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 14 Specify the region to place machines on. 23 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 17 18 Optional: Specify custom tags in your machine set. Provide the tag name in <custom_tag_name> field and the corresponding tag value in <custom_tag_value> field. 
24 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on Azure support non-guaranteed Spot VMs . You can save on costs by using Spot VMs at a lower price compared to standard VMs on Azure. You can configure Spot VMs by adding spotVMOptions to the MachineSet YAML file. Additional resources Selecting an Azure Marketplace image 8.2.1.4. Sample YAML for a machine set custom resource on Azure Stack Hub This sample YAML defines a machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: "" sshPublicKey: "" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: "1" 22 1 5 7 14 16 17 18 21 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 3 8 9 11 19 20 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID, <infra> node label, and region. 12 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 15 Specify the region to place machines on. 13 Specify the availability set for the cluster. 22 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. Note Machine sets running on Azure Stack Hub do not support non-guaranteed Spot VMs. 8.2.1.5. Sample YAML for a machine set custom resource on IBM Cloud This sample YAML defines a machine set that runs in a specified IBM Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 5 7 The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 16 The <infra> node label. 
4 6 10 The infrastructure ID, <infra> node label, and region. 11 The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation. 12 The infrastructure ID and zone within your region to place machines on. Be sure that your region supports the zone that you specify. 13 Specify the IBM Cloud instance profile . 14 Specify the region to place machines on. 15 The resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID. 17 The VPC name. 18 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 19 The taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 8.2.1.6. Sample YAML for a machine set custom resource on GCP This sample YAML defines a machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 6 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 For <infra> , specify the <infra> node label. 3 Specify the path to the image that is used in current machine sets. 
If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \ get machineset/<infrastructure_id>-worker-a To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-48-x86-64-202206140145 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-48-x86-64-202206140145 4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 5 For <project_name> , specify the name of the GCP project that you use for your cluster. 6 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on GCP support non-guaranteed preemptible VM instances . You can save on costs by using preemptible VM instances at a lower price compared to normal instances on GCP. You can configure preemptible VM instances by adding preemptible to the MachineSet YAML file. 8.2.1.7. Sample YAML for a machine set custom resource on Nutanix This sample YAML defines a Nutanix machine set that creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<zone> 4 namespace: openshift-machine-api annotations: 5 machine.openshift.io/memoryMb: "16384" machine.openshift.io/vCPU: "4" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 6 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 7 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 8 machine.openshift.io/cluster-api-machine-role: <infra> 9 machine.openshift.io/cluster-api-machine-type: <infra> 10 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 11 spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: machine.openshift.io/v1 cluster: type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 12 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 13 subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 14 userDataSecret: name: <user_data_secret> 15 vcpuSockets: 4 16 vcpusPerSocket: 1 17 taints: 18 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 6 8 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 9 10 Specify the <infra> node label. 4 7 11 Specify the infrastructure ID, <infra> node label, and zone. 5 Annotations for the cluster autoscaler. 12 Specify the image to use. Use an image from an existing default machine set for the cluster. 13 Specify the amount of memory for the cluster in Gi. 14 Specify the size of the system disk in Gi. 15 Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installer populates in the default machine set. 16 Specify the number of vCPU sockets. 17 Specify the number of vCPUs per socket. 18 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 8.2.1.8. Sample YAML for a machine set custom resource on RHOSP This sample YAML defines a machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone> 1 5 7 14 16 17 18 19 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 20 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID and <infra> node label. 11 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 12 To set a server group policy for the MachineSet, enter the value that is returned from creating a server group . For most deployments, anti-affinity or soft-anti-affinity policies are recommended. 13 Required for deployments to multiple networks. If deploying to multiple networks, this list must include the network that is used as the primarySubnet value. 15 Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file. 8.2.1.9. Sample YAML for a machine set custom resource on RHV This sample YAML defines a machine set that runs on RHV and creates nodes that are labeled with node-role.kubernetes.io/<node_role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 Selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: "" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 sparse: <boolean_value> 16 format: <raw_or_cow> 17 cpu: 18 sockets: <number_of_sockets> 19 cores: <number_of_cores> 20 threads: <number_of_threads> 21 memory_mb: <memory_size> 22 guaranteed_memory_mb: <memory_size> 23 os_disk: 24 size_gb: <disk_size> 25 storage_domain_id: <storage_domain_UUID> 26 network_interfaces: 27 vnic_profile_id: <vnic_profile_id> 28 credentialsSecret: name: ovirt-credentials 29 kind: OvirtMachineProviderSpec type: <workload_type> 30 auto_pinning_policy: <auto_pinning_policy> 31 hugepages: <hugepages> 32 affinityGroupsNames: - compute 33 userDataSecret: name: worker-user-data 1 7 9 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 10 11 13 Specify the node label to add. 
4 8 12 Specify the infrastructure ID and node label. These two strings together cannot be longer than 35 characters. 5 Specify the number of machines to create. 6 Selector for the machines. 14 Specify the UUID for the RHV cluster to which this VM instance belongs. 15 Specify the RHV VM template to use to create the machine. 16 Setting this option to false enables preallocation of disks. The default is true . Setting sparse to true with format set to raw is not available for block storage domains. The raw format writes the entire virtual disk to the underlying physical disk. 17 Can be set to cow or raw . The default is cow . The cow format is optimized for virtual machines. Note Preallocating disks on file storage domains writes zeroes to the file. This might not actually preallocate disks depending on the underlying storage. 18 Optional: The CPU field contains the CPU configuration, including sockets, cores, and threads. 19 Optional: Specify the number of sockets for a VM. 20 Optional: Specify the number of cores per socket. 21 Optional: Specify the number of threads per core. 22 Optional: Specify the size of a VM's memory in MiB. 23 Optional: Specify the size of a virtual machine's guaranteed memory in MiB. This is the amount of memory that is guaranteed not to be drained by the ballooning mechanism. For more information, see Memory Ballooning and Optimization Settings Explained . Note If you are using a version earlier than RHV 4.4.8, see Guaranteed memory requirements for OpenShift on Red Hat Virtualization clusters . 24 Optional: Root disk of the node. 25 Optional: Specify the size of the bootable disk in GiB. 26 Optional: Specify the UUID of the storage domain for the compute node's disks. If none is provided, the compute node is created on the same storage domain as the control nodes. (default) 27 Optional: List of the network interfaces of the VM. If you include this parameter, OpenShift Container Platform discards all network interfaces from the template and creates new ones. 28 Optional: Specify the vNIC profile ID. 29 Specify the name of the secret object that holds the RHV credentials. 30 Optional: Specify the workload type for which the instance is optimized. This value affects the RHV VM parameter. Supported values: desktop , server (default), high_performance . high_performance improves performance on the VM. Limitations exist, for example, you cannot access the VM with a graphical console. For more information, see Configuring High Performance Virtual Machines, Templates, and Pools in the Virtual Machine Management Guide . 31 Optional: AutoPinningPolicy defines the policy that automatically sets CPU and NUMA settings, including pinning to the host for this instance. Supported values: none , resize_and_pin . For more information, see Setting NUMA Nodes in the Virtual Machine Management Guide . 32 Optional: Hugepages is the size in KiB for defining hugepages in a VM. Supported values: 2048 or 1048576 . For more information, see Configuring Huge Pages in the Virtual Machine Management Guide . 33 Optional: A list of affinity group names to be applied to the VMs. The affinity groups must exist in oVirt. Note Because RHV uses a template when creating a VM, if you do not specify a value for an optional parameter, RHV uses the value for that parameter that is specified in the template. 8.2.1.10. 
Sample YAML for a machine set custom resource on vSphere This sample YAML defines a machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: "<vm_network_name>" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and <infra> node label. 6 7 9 Specify the <infra> node label. 10 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 11 Specify the vSphere VM network to deploy the machine set to. This VM network must be where other compute machines reside in the cluster. 12 Specify the vSphere VM template to use, such as user-5ddjd-rhcos . 13 Specify the vCenter Datacenter to deploy the machine set on. 14 Specify the vCenter Datastore to deploy the machine set on. 15 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 16 Specify the vSphere resource pool for your VMs. 17 Specify the vCenter server IP or fully qualified domain name. 8.2.2. Creating a machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. 
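Before you begin the following procedure, you can quickly confirm the prerequisites from the command line. This check is not part of the documented procedure; it is a minimal sketch that assumes the oc client is already configured for your cluster.

# Confirm which identity oc is currently using.
oc whoami

# Verify that this identity can create compute machine sets in the
# openshift-machine-api namespace. A cluster-admin user returns "yes".
oc auth can-i create machinesets.machine.openshift.io -n openshift-machine-api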
Procedure Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again. 8.2.3. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Requirements of the cluster dictate that infrastructure, also called infra nodes, be provisioned. The installer only provides provisions for control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called app , nodes through labeling. 
Procedure Add a label to the worker node that you want to act as application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check to see if applicable nodes now have the infra role and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1 # ... 1 This example node selector deploys pods on nodes in the us-east-1 region by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources Moving resources to infrastructure machine sets 8.2.4. Creating a machine config pool for infrastructure machines If you need infrastructure machines to have dedicated configurations, you must create an infra pool. Procedure Add a label to the node you want to assign as the infra node with a specific label: USD oc label node <node_name> <label> USD oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra= Create a machine config pool that contains both the worker role and your custom role as machine config selector: USD cat infra.mcp.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" 2 1 Add the worker role and your custom role. 2 Add the label you added to the node as a nodeSelector . Note Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool. 
After you have the YAML file, you can create the machine config pool: USD oc create -f infra.mcp.yaml Check the machine configs to ensure that the infrastructure configuration rendered successfully: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d You should see a new machine config, with the rendered-infra-* prefix. Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra . Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes. Note After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration. Create a machine config: USD cat infra.mc.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra 1 Add the label you added to the node as a nodeSelector . 
Apply the machine config to the infra-labeled nodes: USD oc create -f infra.mc.yaml Confirm that your new machine config pool is available: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m In this example, a worker node was changed to an infra node. Additional resources See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool. 8.3. Assigning machine set resources to infrastructure nodes After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role applied are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied. However, because an infra node is also assigned as a worker, there is a chance that user workloads could be inadvertently assigned to it. To avoid this, you can apply a taint to the infra node and tolerations for the pods you want to control. 8.3.1. Binding infrastructure node workloads using taints and tolerations If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it. Important It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exist. The infra label communicates to the cluster that it does not count toward the total number of subscriptions. Prerequisites Configure additional MachineSet objects in your OpenShift Container Platform cluster. Procedure Add a taint to the infra node to prevent scheduling user workloads on it: Determine if the node has the taint: USD oc describe nodes <node_name> Sample output oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker ... Taints: node-role.kubernetes.io/infra:NoSchedule ... This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step. If you have not configured a taint to prevent scheduling user workloads on the node, add one: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved ... This example places a taint on node1 that has key node-role.kubernetes.io/infra and taint effect NoExecute . Nodes with the NoExecute effect schedule only pods that tolerate the taint and evict existing pods that do not tolerate the taint. Note If a descheduler is used, pods violating node taints could be evicted from the cluster.
Add tolerations for the pod configurations you want to schedule on the infra node, such as the router, registry, and monitoring workloads. Add the following code to the Pod object specification: tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3 value: reserved 4 1 Specify the effect that you added to the node. 2 Specify the key that you added to the node. 3 Specify the Exists Operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node. 4 Specify the value of the key-value pair taint that you added to the node. This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node. Note Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details. Additional resources See Controlling pod placement using the scheduler for general information on scheduling a pod to a node. See Moving resources to infrastructure machine sets for instructions on scheduling pods to infra nodes. 8.4. Moving resources to infrastructure machine sets Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created by adding the infrastructure node selector, as shown: spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Applying a specific node selector to all infrastructure components causes OpenShift Container Platform to schedule those workloads on nodes with that label . 8.4.1. Moving the router You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster.
Procedure View the IngressController custom resource for the router Operator: USD oc get ingresscontroller default -n openshift-ingress-operator -o yaml The command output resembles the following text: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: "11341" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: "True" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default Edit the ingresscontroller resource and change the nodeSelector to use the infra label: USD oc edit ingresscontroller default -n openshift-ingress-operator spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Confirm that the router pod is running on the infra node. View the list of router pods and note the node name of the running pod: USD oc get pod -n openshift-ingress -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none> In this example, the running pod is on the ip-10-0-217-226.ec2.internal node. View the node status of the running pod: USD oc get node <node_name> 1 1 Specify the <node_name> that you obtained from the pod list. Example output NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.24.0 Because the role list includes infra , the pod is running on the correct node. 8.4.2. Moving the default registry You configure the registry Operator to deploy its pods to different nodes. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster. Procedure View the config/instance object: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: "56174" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status: ... 
Edit the config/instance object: USD oc edit configs.imageregistry.operator.openshift.io/cluster spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Verify the registry pod has been moved to the infrastructure node. Run the following command to identify the node where the registry pod is located: USD oc get pods -o wide -n openshift-image-registry Confirm the node has the label you specified: USD oc describe node <node_name> Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list. 8.4.3. Moving the monitoring solution The monitoring stack includes multiple components, including Prometheus, Thanos Querier, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map. Procedure Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key:
node-role.kubernetes.io/infra value: reserved effect: NoExecute 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Watch the monitoring pods move to the new machines: USD watch 'oc get pod -n openshift-monitoring -o wide' If a component has not moved to the infra node, delete the pod with this component: USD oc delete pod -n openshift-monitoring <pod> The component from the deleted pod is re-created on the infra node. 8.4.4. Moving logging resources You can configure the Red Hat OpenShift Logging Operator to deploy the pods for logging components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Red Hat OpenShift Logging Operator pod from its installed location. For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements. Prerequisites You have installed the Red Hat OpenShift Logging Operator and the OpenShift Elasticsearch Operator. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance Example ClusterLogging CR apiVersion: logging.openshift.io/v1 kind: ClusterLogging # ... spec: logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana # ... 1 2 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Verification To verify that a component has moved, you can use the oc get pod -o wide command.
For example: You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node: USD oc get pod kibana-5b8bdf44f9-ccpq9 -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none> You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.24.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.24.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.24.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.24.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.24.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.24.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.24.0 Note that the node has a node-role.kubernetes.io/infra: '' label: USD oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml Example output kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: '' ... To move the Kibana pod, edit the ClusterLogging CR to add a node selector: apiVersion: logging.openshift.io/v1 kind: ClusterLogging # ... spec: # ... visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana 1 Add a node selector to match the label in the node specification. After you save the CR, the current Kibana pod is terminated and new pod is deployed: USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m collector-42dzz 1/1 Running 0 28m collector-d74rq 1/1 Running 0 28m collector-m5vr9 1/1 Running 0 28m collector-nkxl7 1/1 Running 0 28m collector-pdvqb 1/1 Running 0 28m collector-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node: USD oc get pod kibana-7d85dcffc8-bfpfp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none> After a few moments, the original Kibana pod is removed. 
USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m collector-42dzz 1/1 Running 0 29m collector-d74rq 1/1 Running 0 29m collector-m5vr9 1/1 Running 0 29m collector-nkxl7 1/1 Running 0 29m collector-pdvqb 1/1 Running 0 29m collector-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s Additional resources See the monitoring documentation for the general instructions on moving OpenShift Container Platform components. | [
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<zone> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: alibabacloud-credentials imageId: <image_id> 11 instanceType: <instance_type> 12 kind: AlibabaCloudMachineProviderConfig ramRoleName: <infrastructure_id>-role-worker 13 regionId: <region> 14 resourceGroup: 15 id: <resource_group_id> type: ID securityGroups: - tags: 16 - Key: Name Value: <infrastructure_id>-sg-<role> type: Tags systemDisk: 17 category: cloud_essd size: <disk_size> tag: 18 - Key: kubernetes.io/cluster/<infrastructure_id> Value: owned userDataSecret: name: <user_data_secret> 19 vSwitch: tags: 20 - Key: Name Value: <infrastructure_id>-vswitch-<zone> type: Tags vpcId: \"\" zoneId: <zone> 21 taints: 22 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"spec: template: spec: providerSpec: value: securityGroups: - tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2 Value: ocp - Key: Name Value: <infrastructure_id>-sg-<role> 3 type: Tags",
"spec: template: spec: providerSpec: value: tag: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp",
"spec: template: spec: providerSpec: value: vSwitch: tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp - Key: Name Value: <infrastructure_id>-vswitch-<zone> 4 type: Tags",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: ami: id: ami-046fe691f52a953f9 11 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 12 instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 13 region: <region> 14 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 15 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 16 tags: - name: kubernetes.io/cluster/<infrastructure_id> 17 value: owned userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-worker-<zone>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/infra: \"\" 11 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 12 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 17 value: <custom_tag_value> 18 subnet: <infrastructure_id>-<role>-subnet 19 20 userDataSecret: name: worker-user-data 21 vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet 22 zone: \"1\" 23 taints: 24 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: \"1\" 22",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 6 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<zone> 4 namespace: openshift-machine-api annotations: 5 machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 6 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 7 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 8 machine.openshift.io/cluster-api-machine-role: <infra> 9 machine.openshift.io/cluster-api-machine-type: <infra> 10 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 11 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 cluster: type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 12 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 13 subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 14 userDataSecret: name: <user_data_secret> 15 vcpuSockets: 4 16 vcpusPerSocket: 1 17 taints: 18 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone>",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 Selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 sparse: <boolean_value> 16 format: <raw_or_cow> 17 cpu: 18 sockets: <number_of_sockets> 19 cores: <number_of_cores> 20 threads: <number_of_threads> 21 memory_mb: <memory_size> 22 guaranteed_memory_mb: <memory_size> 23 os_disk: 24 size_gb: <disk_size> 25 storage_domain_id: <storage_domain_UUID> 26 network_interfaces: 27 vnic_profile_id: <vnic_profile_id> 28 credentialsSecret: name: ovirt-credentials 29 kind: OvirtMachineProviderSpec type: <workload_type> 30 auto_pinning_policy: <auto_pinning_policy> 31 hugepages: <hugepages> 32 affinityGroupsNames: - compute 33 userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc label node <node-name> node-role.kubernetes.io/app=\"\"",
"oc label node <node-name> node-role.kubernetes.io/infra=\"\"",
"oc get nodes",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1",
"oc label node <node_name> <label>",
"oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=",
"cat infra.mcp.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2",
"oc create -f infra.mcp.yaml",
"oc get machineconfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d",
"cat infra.mc.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra",
"oc create -f infra.mc.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m",
"oc describe nodes <node_name>",
"describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved",
"tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3 value: reserved 4",
"spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get ingresscontroller default -n openshift-ingress-operator -o yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default",
"oc edit ingresscontroller default -n openshift-ingress-operator",
"spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pod -n openshift-ingress -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>",
"oc get node <node_name> 1",
"NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.24.0",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pods -o wide -n openshift-image-registry",
"oc describe node <node_name>",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute",
"watch 'oc get pod -n openshift-monitoring -o wide'",
"oc delete pod -n openshift-monitoring <pod>",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pod kibana-5b8bdf44f9-ccpq9 -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.24.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.24.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.24.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.24.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.24.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.24.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.24.0",
"oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m collector-42dzz 1/1 Running 0 28m collector-d74rq 1/1 Running 0 28m collector-m5vr9 1/1 Running 0 28m collector-nkxl7 1/1 Running 0 28m collector-pdvqb 1/1 Running 0 28m collector-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s",
"oc get pod kibana-7d85dcffc8-bfpfp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m collector-42dzz 1/1 Running 0 29m collector-d74rq 1/1 Running 0 29m collector-m5vr9 1/1 Running 0 29m collector-nkxl7 1/1 Running 0 29m collector-pdvqb 1/1 Running 0 29m collector-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/machine_management/creating-infrastructure-machinesets |
B.3. Identity Management Clients | B.3. Identity Management Clients This section describes common client problems for IdM in Red Hat Enterprise Linux. Additional resources: To validate your /etc/sssd.conf file, see SSSD Configuration Validation in the System-Level Authentication Guide . B.3.1. The Client Is Unable to Resolve Reverse Lookups when Using an External DNS An external DNS server returns a wrong host name for the IdM server. The following errors related to the IdM server appear in the Kerberos database: What this means: The external DNS name server returns the wrong host name for the IdM server or returns no answer at all. To fix the problem: Verify your DNS configuration, and make sure the DNS domains used by IdM are properly delegated. See Section 2.1.5, "Host Name and DNS Configuration" for details. Verify your reverse (PTR) DNS records settings. See Chapter 33, Managing DNS for details. B.3.2. The Client Is Not Added to the DNS Zone When running the ipa-client-install utility, the nsupdate utility fails to add the client to the DNS zone. What this means: The DNS configuration is incorrect. To fix the problem: Verify your configuration for DNS delegation from the parent zone to IdM. See Section 2.1.5, "Host Name and DNS Configuration" for details. Make sure that dynamic updates are allowed in the IdM zone. See Section 33.5.1, "Enabling Dynamic DNS Updates" for details. For details on managing DNS in IdM, see Section 33.7, "Managing Reverse DNS Zones" . For details on managing DNS in Red Hat Enterprise Linux, see Editing Zone Files in the Networking Guide . B.3.3. Client Connection Problems Users cannot log in to a machine. Attempts to access user and group information, such as with the getent passwd admin command, fail. What this means: Client authentication problems often indicate problems with the System Security Services Daemon (SSSD) service. To fix the problem: Examine the SSSD logs in the /var/log/sssd/ directory. The directory includes a log file for the DNS domain, such as sssd_ example.com .log . If the logs do not include enough information, increase the log level: In the /etc/sssd/sssd.conf file, look up the [domain/ example.com ] section. Adjust the debug_level option to record more information in the logs. Restart the sssd service. Examine sssd_ example.com .log again. The file now includes more error messages. | [
"Jun 30 11:11:48 server1 krb5kdc[1279](info): AS_REQ (4 etypes {18 17 16 23}) 192.0.2.1: NEEDED_PREAUTH: admin EXAMPLE COM for krbtgt/EXAMPLE COM EXAMPLE COM, Additional pre-authentication required Jun 30 11:11:48 server1 krb5kdc[1279](info): AS_REQ (4 etypes {18 17 16 23}) 192.0.2.1: ISSUE: authtime 1309425108, etypes {rep=18 tkt=18 ses=18}, admin EXAMPLE COM for krbtgt/EXAMPLE COM EXAMPLE COM Jun 30 11:11:49 server1 krb5kdc[1279](info): TGS_REQ (4 etypes {18 17 16 23}) 192.0.2.1: UNKNOWN_SERVER: authtime 0, admin EXAMPLE COM for HTTP/[email protected], Server not found in Kerberos database",
"debug_level = 9",
"systemctl start sssd"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/trouble-client |
Chapter 7. Using CPU Manager and Topology Manager | Chapter 7. Using CPU Manager and Topology Manager CPU Manager manages groups of CPUs and constrains workloads to specific CPUs. CPU Manager is useful for workloads that have some of these attributes: Require as much CPU time as possible. Are sensitive to processor cache misses. Are low-latency network applications. Coordinate with other processes and benefit from sharing a single processor cache. Topology Manager collects hints from the CPU Manager, Device Manager, and other Hint Providers to align pod resources, such as CPU, SR-IOV VFs, and other device resources, for all Quality of Service (QoS) classes on the same non-uniform memory access (NUMA) node. Topology Manager uses topology information from the collected hints to decide if a pod can be accepted or rejected on a node, based on the configured Topology Manager policy and pod resources requested. Topology Manager is useful for workloads that use hardware accelerators to support latency-critical execution and high throughput parallel computation. To use Topology Manager you must configure CPU Manager with the static policy. 7.1. Setting up CPU Manager Procedure Optional: Label a node: # oc label node perf-node.example.com cpumanager=true Edit the MachineConfigPool of the nodes where CPU Manager should be enabled. In this example, all workers have CPU Manager enabled: # oc edit machineconfigpool worker Add a label to the worker machine config pool: metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled Create a KubeletConfig , cpumanager-kubeletconfig.yaml , custom resource (CR). Refer to the label created in the step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 Specify a policy: none . This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. This is the default policy. static . This policy allows containers in guaranteed pods with integer CPU requests. It also limits access to exclusive CPUs on the node. If static , you must use a lowercase s . 2 Optional. Specify the CPU Manager reconcile frequency. The default is 5s . Create the dynamic kubelet config: # oc create -f cpumanager-kubeletconfig.yaml This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed. Check for the merged kubelet config: # oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7 Example output "ownerReferences": [ { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "KubeletConfig", "name": "cpumanager-enabled", "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878" } ] Check the worker for the updated kubelet.conf : # oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager Example output cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 cpuManagerPolicy is defined when you create the KubeletConfig CR. 2 cpuManagerReconcilePeriod is defined when you create the KubeletConfig CR. Create a pod that requests a core or multiple cores. 
Both limits and requests must have their CPU value set to a whole integer. That is the number of cores that will be dedicated to this pod: # cat cpumanager-pod.yaml Example output apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: "1G" limits: cpu: 1 memory: "1G" nodeSelector: cpumanager: "true" Create the pod: # oc create -f cpumanager-pod.yaml Verify that the pod is scheduled to the node that you labeled: # oc describe pod cpumanager Example output Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx ... Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G ... QoS Class: Guaranteed Node-Selectors: cpumanager=true Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process: # ├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause Pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice . Pods of other QoS tiers end up in child cgroups of kubepods : # cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope # for i in `ls cpuset.cpus tasks` ; do echo -n "USDi "; cat USDi ; done Example output cpuset.cpus 1 tasks 32706 Check the allowed CPU list for the task: # grep ^Cpus_allowed_list /proc/32706/status Example output Cpus_allowed_list: 1 Verify that another pod (in this case, the pod in the burstable QoS tier) on the system cannot run on the core allocated for the Guaranteed pod: # cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 # oc describe node perf-node.example.com Example output ... Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%) This VM has two CPU cores. The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled: NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s 7.2. 
Topology Manager policies Topology Manager aligns Pod resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources. Topology Manager supports four allocation policies, which you assign in the KubeletConfig custom resource (CR) named cpumanager-enabled : none policy This is the default policy and does not perform any topology alignment. best-effort policy For each container in a pod with the best-effort topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node. restricted policy For each container in a pod with the restricted topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager rejects this pod from the node, resulting in a pod in a Terminated state with a pod admission failure. single-numa-node policy For each container in a pod with the single-numa-node topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. This results in a pod in a Terminated state with a pod admission failure. 7.3. Setting up Topology Manager To use Topology Manager, you must configure an allocation policy in the KubeletConfig custom resource (CR) named cpumanager-enabled . This file might exist if you have set up CPU Manager. If the file does not exist, you can create the file. Prerequisites Configure the CPU Manager policy to be static . Procedure To activate Topology Manager: Configure the Topology Manager allocation policy in the custom resource. USD oc edit KubeletConfig cpumanager-enabled apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2 1 This parameter must be static with a lowercase s . 2 Specify your selected Topology Manager allocation policy. Here, the policy is single-numa-node . Acceptable values are: default , best-effort , restricted , single-numa-node . 7.4. Pod interactions with Topology Manager policies The example Pod specs below help illustrate pod interactions with Topology Manager. The following pod runs in the BestEffort QoS class because no resource requests or limits are specified. spec: containers: - name: nginx image: nginx The pod runs in the Burstable QoS class because requests are less than limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" requests: memory: "100Mi" If the selected policy is anything other than none , Topology Manager would not consider either of these Pod specifications. The last example pod below runs in the Guaranteed QoS class because requests are equal to limits. 
spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" cpu: "2" example.com/device: "1" requests: memory: "200Mi" cpu: "2" example.com/device: "1" Topology Manager would consider this pod. The Topology Manager would consult the hint providers, which are CPU Manager and Device Manager, to get topology hints for the pod. Topology Manager will use this information to store the best topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage. | [
"oc label node perf-node.example.com cpumanager=true",
"oc edit machineconfigpool worker",
"metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"oc create -f cpumanager-kubeletconfig.yaml",
"oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7",
"\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]",
"oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager",
"cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"cat cpumanager-pod.yaml",
"apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"",
"oc create -f cpumanager-pod.yaml",
"oc describe pod cpumanager",
"Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true",
"├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause",
"cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"USDi \"; cat USDi ; done",
"cpuset.cpus 1 tasks 32706",
"grep ^Cpus_allowed_list /proc/32706/status",
"Cpus_allowed_list: 1",
"cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com",
"Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)",
"NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s",
"oc edit KubeletConfig cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2",
"spec: containers: - name: nginx image: nginx",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/scalability_and_performance/using-cpu-manager |
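To see why the second pod from the example above stays in the Pending state, the scheduler's events can be inspected. This is a minimal check: the pod name cpumanager-7qc2t is the pending pod from the previous output, and the event text shown is abridged and illustrative, since the exact wording depends on the cluster version. # oc describe pod cpumanager-7qc2t ... Events: Type Reason Message ---- ------ ------- Warning FailedScheduling 0/1 nodes are available: 1 Insufficient cpu. Freeing the exclusive core, for example by deleting the first pod, allows the pending pod to be scheduled.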
5.6.2. RAID-Based Storage | 5.6.2. RAID-Based Storage One skill that a system administrator should cultivate is the ability to look at complex system configurations, and observe the different shortcomings inherent in each configuration. While this might, at first glance, seem to be a rather depressing viewpoint to take, it can be a great way to look beyond the shiny new boxes and visualize some future Saturday night with all production down due to a failure that could easily have been avoided with a bit of forethought. With this in mind, let us use what we now know about disk-based storage and see if we can determine the ways that disk drives can cause problems. First, consider an outright hardware failure: A disk drive with four partitions on it dies completely: what happens to the data on those partitions? It is immediately unavailable (at least until the failing unit can be replaced, and the data restored from a recent backup). A disk drive with a single partition on it is operating at the limits of its design due to massive I/O loads: what happens to applications that require access to the data on that partition? The applications slow down because the disk drive cannot process reads and writes any faster. You have a large data file that is slowly growing in size; soon it will be larger than the largest disk drive available for your system. What happens then? The disk drive fills up, the data file stops growing, and its associated applications stop running. Just one of these problems could cripple a data center, yet system administrators must face these kinds of issues every day. What can be done? Fortunately, there is one technology that can address each one of these issues. The name for that technology is RAID . 5.6.2.1. Basic Concepts RAID is an acronym standing for Redundant Array of Independent Disks [21] . As the name implies, RAID is a way for multiple disk drives to act as if they were a single disk drive. RAID techniques were first developed by researchers at the University of California, Berkeley in the mid-1980s. At the time, there was a large gap in price between the high-performance disk drives used on the large computer installations of the day, and the smaller, slower disk drives used by the still-young personal computer industry. RAID was viewed as a method of having several less expensive disk drives fill in for one higher-priced unit. More importantly, RAID arrays can be constructed in different ways, resulting in different characteristics depending on the final configuration. Let us look at the different configurations (known as RAID levels ) in more detail. 5.6.2.1.1. RAID Levels The Berkeley researchers originally defined five different RAID levels and numbered them "1" through "5." In time, additional RAID levels were defined by other researchers and members of the storage industry. Not all RAID levels were equally useful; some were of interest only for research purposes, and others could not be economically implemented. In the end, there were three RAID levels that ended up seeing widespread usage: Level 0 Level 1 Level 5 The following sections discuss each of these levels in more detail. 5.6.2.1.1.1. RAID 0 The disk configuration known as RAID level 0 is a bit misleading, as this is the only RAID level that employs absolutely no redundancy. However, even though RAID 0 has no advantages from a reliability standpoint, it does have other benefits. A RAID 0 array consists of two or more disk drives. 
The available storage capacity on each drive is divided into chunks , which represent some multiple of the drives' native block size. Data written to the array is be written, chunk by chunk, to each drive in the array. The chunks can be thought of as forming stripes across each drive in the array; hence the other term for RAID 0: striping . For example, with a two-drive array and a 4KB chunk size, writing 12KB of data to the array would result in the data being written in three 4KB chunks to the following drives: The first 4KB would be written to the first drive, into the first chunk The second 4KB would be written to the second drive, into the first chunk The last 4KB would be written to the first drive, into the second chunk Compared to a single disk drive, the advantages to RAID 0 include: Larger total size -- RAID 0 arrays can be constructed that are larger than a single disk drive, making it easier to store larger data files Better read/write performance -- The I/O load on a RAID 0 array is spread evenly among all the drives in the array (Assuming all the I/O is not concentrated on a single chunk) No wasted space -- All available storage on all drives in the array are available for data storage Compared to a single disk drive, RAID 0 has the following disadvantage: Less reliability -- Every drive in a RAID 0 array must be operative for the array to be available; a single drive failure in an N -drive RAID 0 array results in the removal of 1/ N th of all the data, rendering the array useless Note If you have trouble keeping the different RAID levels straight, just remember that RAID 0 has zero percent redundancy. 5.6.2.1.1.2. RAID 1 RAID 1 uses two (although some implementations support more) identical disk drives. All data is written to both drives, making them mirror images of each other. That is why RAID 1 is often known as mirroring . Whenever data is written to a RAID 1 array, two physical writes must take place: one to the first drive, and one to the second drive. Reading data, on the other hand, only needs to take place once and either drive in the array can be used. Compared to a single disk drive, a RAID 1 array has the following advantages: Improved redundancy -- Even if one drive in the array were to fail, the data would still be accessible Improved read performance -- With both drives operational, reads can be evenly split between them, reducing per-drive I/O loads When compared to a single disk drive, a RAID 1 array has some disadvantages: Maximum array size is limited to the largest single drive available. Reduced write performance -- Because both drives must be kept up-to-date, all write I/Os must be performed by both drives, slowing the overall process of writing data to the array Reduced cost efficiency -- With one entire drive dedicated to redundancy, the cost of a RAID 1 array is at least double that of a single drive Note If you have trouble keeping the different RAID levels straight, just remember that RAID 1 has one hundred percent redundancy. 5.6.2.1.1.3. RAID 5 RAID 5 attempts to combine the benefits of RAID 0 and RAID 1, while minimizing their respective disadvantages. Like RAID 0, a RAID 5 array consists of multiple disk drives, each divided into chunks. This allows a RAID 5 array to be larger than any single drive. Like a RAID 1 array, a RAID 5 array uses some disk space in a redundant fashion, improving reliability. However, the way RAID 5 works is unlike either RAID 0 or 1. 
A RAID 5 array must consist of at least three identically-sized disk drives (although more drives may be used). Each drive is divided into chunks and data is written to the chunks in order. However, not every chunk is dedicated to data storage as it is in RAID 0. Instead, in an array with n disk drives in it, every n th chunk is dedicated to parity . Chunks containing parity make it possible to recover data should one of the drives in the array fail. The parity in chunk x is calculated by mathematically combining the data from each chunk x stored on all the other drives in the array. If the data in a chunk is updated, the corresponding parity chunk must be recalculated and updated as well. This also means that every time data is written to the array, at least two drives are written to: the drive holding the data, and the drive containing the parity chunk. One key point to keep in mind is that the parity chunks are not concentrated on any one drive in the array. Instead, they are spread evenly across all the drives. Even though dedicating a specific drive to contain nothing but parity is possible (in fact, this configuration is known as RAID level 4), the constant updating of parity as data is written to the array would mean that the parity drive could become a performance bottleneck. By spreading the parity information evenly throughout the array, this impact is reduced. However, it is important to keep in mind the impact of parity on the overall storage capacity of the array. Even though the parity information is spread evenly across all the drives in the array, the amount of available storage is reduced by the size of one drive. Compared to a single drive, a RAID 5 array has the following advantages: Improved redundancy -- If one drive in the array fails, the parity information can be used to reconstruct the missing data chunks, all while keeping the array available for use [22] Improved read performance -- Due to the RAID 0-like way data is divided between drives in the array, read I/O activity is spread evenly between all the drives Reasonably good cost efficiency -- For a RAID 5 array of n drives, only 1/ n th of the total available storage is dedicated to redundancy Compared to a single drive, a RAID 5 array has the following disadvantage: Reduced write performance -- Because each write to the array results in at least two writes to the physical drives (one write for the data and one for the parity), write performance is worse than a single drive [23] 5.6.2.1.1.4. Nested RAID Levels As should be obvious from the discussion of the various RAID levels, each level has specific strengths and weaknesses. It was not long after RAID-based storage began to be deployed that people began to wonder whether different RAID levels could somehow be combined, producing arrays with all of the strengths and none of the weaknesses of the original levels. For example, what if the disk drives in a RAID 0 array were themselves actually RAID 1 arrays? This would give the advantages of RAID 0's speed, with the reliability of RAID 1. This is just the kind of thing that can be done. Here are the most commonly-nested RAID levels: RAID 1+0 RAID 5+0 RAID 5+1 Because nested RAID is used in more specialized environments, we will not go into greater detail here. However, there are two points to keep in mind when thinking about nested RAID: Order matters -- The order in which RAID levels are nested can have a large impact on reliability. In other words, RAID 1+0 and RAID 0+1 are not the same. 
Costs can be high -- If there is any disadvantage common to all nested RAID implementations, it is one of cost; for example, the smallest possible RAID 5+1 array consists of six disk drives (and even more drives are required for larger arrays). Now that we have explored the concepts behind RAID, let us see how RAID can be implemented. 5.6.2.1.2. RAID Implementations It is obvious from the sections that RAID requires additional "intelligence" over and above the usual disk I/O processing for individual drives. At the very least, the following tasks must be performed: Dividing incoming I/O requests to the individual disks in the array For RAID 5, calculating parity and writing it to the appropriate drive in the array Monitoring the individual disks in the array and taking the appropriate action should one fail Controlling the rebuilding of an individual disk in the array, when that disk has been replaced or repaired Providing a means to allow administrators to maintain the array (removing and adding drives, initiating and halting rebuilds, etc.) There are two major methods that may be used to accomplish these tasks. The two sections describe them in more detail. 5.6.2.1.2.1. Hardware RAID A hardware RAID implementation usually takes the form of a specialized disk controller card. The card performs all RAID-related functions and directly controls the individual drives in the arrays attached to it. With the proper driver, the arrays managed by a hardware RAID card appear to the host operating system just as if they were regular disk drives. Most RAID controller cards work with SCSI drives, although there are some ATA-based RAID controllers as well. In any case, the administrative interface is usually implemented in one of three ways: Specialized utility programs that run as applications under the host operating system, presenting a software interface to the controller card An on-board interface using a serial port that is accessed using a terminal emulator A BIOS-like interface that is only accessible during the system's power-up testing Some RAID controllers have more than one type of administrative interface available. For obvious reasons, a software interface provides the most flexibility, as it allows administrative functions while the operating system is running. However, if you are booting an operating system from a RAID controller, an interface that does not require a running operating system is a requirement. Because there are so many different RAID controller cards on the market, it is impossible to go into further detail here. The best course of action is to read the manufacturer's documentation for more information. 5.6.2.1.2.2. Software RAID Software RAID is RAID implemented as kernel- or driver-level software for a particular operating system. As such, it provides more flexibility in terms of hardware support -- as long as the hardware is supported by the operating system, RAID arrays can be configured and deployed. This can dramatically reduce the cost of deploying RAID by eliminating the need for expensive, specialized RAID hardware. Often the excess CPU power available for software RAID parity calculations greatly exceeds the processing power present on a RAID controller card. Therefore, some software RAID implementations actually have the capability for higher performance than hardware RAID implementations. However, software RAID does have limitations not present in hardware RAID. The most important one to consider is support for booting from a software RAID array. 
In most cases, only RAID 1 arrays can be used for booting, as the computer's BIOS is not RAID-aware. Since a single drive from a RAID 1 array is indistinguishable from a non-RAID boot device, the BIOS can successfully start the boot process; the operating system can then change over to software RAID operation once it has gained control of the system. [21] When early RAID research began, the acronym stood for Redundant Array of Inexpensive Disks, but over time the "standalone" disks that RAID was intended to supplant became cheaper and cheaper, rendering the price comparison meaningless. [22] I/O performance is reduced while operating with one drive unavailable, due to the overhead involved in reconstructing the missing data. [23] There is also an impact from the parity calculations required for each write. However, depending on the specific RAID 5 implementation (specifically, where in the system the parity calculations are performed), this impact can range from sizable to nearly nonexistent. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-storage-adv-raid |
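The chunk-mapping and parity ideas described above can be made concrete with a short model. This is an illustration only, not how a real RAID driver is implemented; it assumes the common choice of bitwise XOR as the parity function and the 4KB chunk size used in the RAID 0 example, and it checks the 12KB/two-drive case worked through earlier.
#include <cassert>
#include <cstddef>
#include <vector>
const std::size_t CHUNK = 4096; // 4KB chunk size, as in the striping example
// RAID 0: map a logical chunk number onto (drive index, chunk index on that drive).
void raid0_locate(std::size_t logical_chunk, std::size_t drives,
                  std::size_t &drive, std::size_t &chunk_on_drive)
{
    drive = logical_chunk % drives;          // stripes rotate across the drives
    chunk_on_drive = logical_chunk / drives; // each full stripe advances one chunk per drive
}
// RAID 5: the parity chunk is the XOR of the corresponding data chunks.
// Rebuilding a missing chunk after a drive failure is the same XOR taken over the
// surviving chunks (data and parity alike). Assumes every chunk holds CHUNK bytes.
std::vector<unsigned char> xor_chunks(const std::vector<std::vector<unsigned char> > &chunks)
{
    std::vector<unsigned char> result(CHUNK, 0);
    for (std::size_t d = 0; d < chunks.size(); ++d)
        for (std::size_t i = 0; i < CHUNK; ++i)
            result[i] ^= chunks[d][i];
    return result;
}
int main()
{
    // The 12KB / two-drive example from the RAID 0 section: logical chunks 0, 1, 2.
    std::size_t drive, chunk;
    raid0_locate(0, 2, drive, chunk); assert(drive == 0 && chunk == 0); // first 4KB
    raid0_locate(1, 2, drive, chunk); assert(drive == 1 && chunk == 0); // second 4KB
    raid0_locate(2, 2, drive, chunk); assert(drive == 0 && chunk == 1); // last 4KB
    return 0;
}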
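On current Linux systems, the software RAID approach described above is usually managed with the mdadm utility. The device names below are placeholders for three partitions of equal size; mdadm --detail and /proc/mdstat show the state of the resulting array, including rebuild progress after a failed drive is replaced. # mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 # mdadm --detail /dev/md0 # cat /proc/mdstat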
3.2.2. CPUfreq Setup | 3.2.2. CPUfreq Setup Before selecting and configuring a CPUfreq governor, you need to add the appropriate CPUfreq driver first. Procedure 3.1. How to Add a CPUfreq Driver Use the following command to view which CPUfreq drivers are available for your system: Use modprobe to add the appropriate CPUfreq driver. When using the above command, be sure to remove the .ko filename suffix. Important When choosing an appropriate CPUfreq driver, always choose acpi-cpufreq over p4-clockmod . While using the p4-clockmod driver reduces the clock frequency of a CPU, it does not reduce the voltage. acpi-cpufreq , on the other hand, reduces voltage along with CPU clock frequency, allowing less power consumption and heat output for each unit reduction in performance. You can also view which governors are available for use for a specific CPU using: Some CPUfreq governors may not be available for you to use. In this case, use modprobe to add the necessary kernel modules that enable the specific CPUfreq governor you wish to use. These kernel modules are available in /lib/modules/ [kernel version] /kernel/drivers/cpufreq/ . Procedure 3.2. Enabling a CPUfreq Governor If a specific governor is not listed as available for your CPU, use modprobe to enable the governor you wish to use: Example 3.1. Enabling a Governor If the ondemand governor is not available for your CPU, use the following command: Once a governor is listed as available for your CPU, you can enable it using: | [
"ls /lib/modules/ [kernel version] /kernel/arch/ [architecture] /kernel/cpu/cpufreq/",
"modprobe [CPUfreq driver]",
"cpupower frequency-info --governors",
"modprobe [governor]",
"modprobe cpufreq_ondemand",
"cpupower frequency-set --governor [governor]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/power_management_guide/cpufreq_setup |
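Putting the steps above together, a typical session on a 64-bit Intel system might look like the following. The module names acpi-cpufreq and cpufreq_ondemand are the ones discussed above, the sysfs file is the standard location where the kernel reports the active governor, and the final output line is illustrative. # ls /lib/modules/$(uname -r)/kernel/arch/x86/kernel/cpu/cpufreq/ # modprobe acpi-cpufreq # cpupower frequency-info --governors # modprobe cpufreq_ondemand # cpupower frequency-set --governor ondemand # cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor ondemand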
Chapter 9. Creating SELinux policies for containers | Chapter 9. Creating SELinux policies for containers RHEL 9 provides a tool for generating SELinux policies for containers using the udica package. With udica , you can create a tailored security policy for better control of how a container accesses host system resources, such as storage, devices, and network. This enables you to harden your container deployments against security violations and it also simplifies achieving and maintaining regulatory compliance. 9.1. Introduction to the udica SELinux policy generator To simplify creating new SELinux policies for custom containers, RHEL 9 provides the udica utility. You can use this tool to create a policy based on an inspection of the container JavaScript Object Notation (JSON) file, which contains Linux-capabilities, mount-points, and ports definitions. The tool consequently combines rules generated using the results of the inspection with rules inherited from a specified SELinux Common Intermediate Language (CIL) block. The process of generating SELinux policy for a container using udica has three main parts: Parsing the container spec file in the JSON format Finding suitable allow rules based on the results of the first part Generating final SELinux policy During the parsing phase, udica looks for Linux capabilities, network ports, and mount points. Based on the results, udica detects which Linux capabilities are required by the container and creates an SELinux rule allowing all these capabilities. If the container binds to a specific port, udica uses SELinux user-space libraries to get the correct SELinux label of a port that is used by the inspected container. Afterward, udica detects which directories are mounted to the container file-system name space from the host. The CIL's block inheritance feature allows udica to create templates of SELinux allow rules focusing on a specific action, for example: allow accessing home directories allow accessing log files allow accessing communication with Xserver . These templates are called blocks and the final SELinux policy is created by merging the blocks. Additional resources Generate SELinux policies for containers with udica Red Hat Blog article 9.2. Creating and using an SELinux policy for a custom container With the udica utility, you can generate an SELinux security policy for a custom container. Prerequisites The podman tool for managing containers is installed. If it is not, use the dnf install podman command. A custom Linux container - ubi8 in this example. Procedure Install the udica package: Alternatively, install the container-tools module, which provides a set of container software packages, including udica : Start the ubi8 container that mounts the /home directory with read-only permissions and the /var/spool directory with permissions to read and write. The container exposes the port 21 . Note that now the container runs with the container_t SELinux type. This type is a generic domain for all containers in the SELinux policy and it might be either too strict or too loose for your scenario. 
Open a new terminal, and enter the podman ps command to obtain the ID of the container: Create a container JSON file, and use udica for creating a policy module based on the information in the JSON file: Alternatively: As suggested by the output of udica in the step, load the policy module: Stop the container and start it again with the --security-opt label=type:my_container.process option: Verification Check that the container runs with the my_container.process type: Verify that SELinux now allows access the /home and /var/spool mount points: Check that SELinux allows binding only to the port 21: Additional resources udica(8) and podman(1) man pages on your system udica - Generate SELinux policies for containers (Github.com) Building, running, and managing containers | [
"dnf install -y udica",
"dnf module install -y container-tools",
"podman run --env container=podman -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it ubi8 bash",
"podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 37a3635afb8f registry.access.redhat.com/ubi8:latest bash 15 minutes ago Up 15 minutes ago heuristic_lewin",
"podman inspect 37a3635afb8f > container.json udica -j container.json my_container Policy my_container with container id 37a3635afb8f created! [...]",
"podman inspect 37a3635afb8f | udica my_container Policy my_container with container id 37a3635afb8f created! Please load these modules using: semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil} Restart the container with: \"--security-opt label=type:my_container.process\" parameter",
"semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}",
"podman stop 37a3635afb8f podman run --security-opt label=type: my_container .process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it ubi8 bash",
"ps -efZ | grep my_container .process unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 root 2275 434 1 13:49 pts/1 00:00:00 podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it ubi8 bash system_u:system_r:my_container.process:s0:c270,c963 root 2317 2305 0 13:49 pts/0 00:00:00 bash",
"cd /home ls username cd /var/spool/ touch test",
"dnf install nmap-ncat nc -lvp 21 ... Ncat: Listening on :::21 Ncat: Listening on 0.0.0.0:21 ^C nc -lvp 80 ... Ncat: bind to :::80: Permission denied. QUITTING."
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_selinux/creating-selinux-policies-for-containers_using-selinux |
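After loading the policy module, it can be useful to confirm that it is installed and to review the rules that udica generated before relying on them. The module name my_container matches the policy created above; my_container.cil is written to the directory where udica was run, and the excerpt shown here is abridged and illustrative. # semodule -l | grep my_container my_container # cat my_container.cil (block my_container (blockinherit container) ... )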
Chapter 17. Integrating with email | Chapter 17. Integrating with email With Red Hat Advanced Cluster Security for Kubernetes (RHACS), you can configure your existing email provider to send notifications about policy violations. If you are using Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service), you can use your existing email provider or the built-in email notifier to send email notifications. You can use the Default recipient field to forward alerts from RHACS and the RHACS Cloud Service to an email address. Otherwise, you can use annotations to define an audience and notify them about policy violations associated with a specific deployment or namespace. 17.1. Integrating with email on RHACS You can use email as a notification method by forwarding alerts from RHACS. 17.1.1. Configuring the email plugin The RHACS notifier can send email to a recipient specified in the integration, or it can use annotations to determine the recipient. Important If you are using RHACS Cloud Service, it blocks port 25 by default. Configure your mail server to use port 587 or 465 to send email notifications. Procedure Go to Platform Configuration Integrations . Under the Notifier Integrations section, select Email . Select New Integration . In the Integration name field, enter a name for your email integration. In the Email server field, enter the address of your email server. The email server address includes fully qualified domain name (FQDN) and the port number; for example, smtp.example.com:465 . Optional: If you are using unauthenticated SMTP, select Enable unauthenticated SMTP . This is insecure and not recommended, but might be required for some integrations. For example, you might need to enable this option if you use an internal server for notifications that does not require authentication. Note You cannot change an existing email integration that uses authentication to enable unauthenticated SMTP. You must delete the existing integration and create a new one with Enable unauthenticated SMTP selected. Enter the user name and password of a service account that is used for authentication. Optional: Enter the name that you want to appear in the FROM header of email notifications in the From field; for example, Security Alerts . Specify the email address that you want to appear in the SENDER header of email notifications in the Sender field. Specify the email address that will receive the notifications in the Default recipient field. Optional: Enter an annotation key in Annotation key for recipient . You can use annotations to dynamically determine an email recipient. To do this: Add an annotation similar to the following example in your namespace or deployment YAML file, where email is the Annotation key that you specify in your email integration. You can create an annotation for the deployment or the namespace. Use the annotation key email in the Annotation key for recipient field. If you configured the deployment or namespace with an annotation, the RHACS sends the alert to the email specified in the annotation. Otherwise, it sends the alert to the default recipient. Note The following rules govern how RHACS determines the recipient of an email notification: If the deployment has an annotation key, the annotation's value overrides the default value. If the namespace has an annotation key, the namespace's value overrides the default value. If a deployment has an annotation key and a defined audience, RHACS sends an email to the audience specified in the key. 
If a deployment does not have an annotation key, RHACS checks the namespace for an annotation key and sends an email to the specified audience. If no annotation keys exist, RHACS sends an email to the default recipient. Optional: Select Disable TLS certificate validation (insecure) to send email without TLS. You should not disable TLS unless you are using StartTLS. Note Use TLS for email notifications. Without TLS, all email is sent unencrypted. Optional: To use StartTLS, select either Login or Plain from the Use STARTTLS (requires TLS to be disabled) drop-down menu. Important With StartTLS, credentials are passed in plain text to the email server before the session encryption is established. StartTLS with the Login parameter sends authentication credentials in a base64 encoded string. StartTLS with the Plain parameter sends authentication credentials to your mail relay in plain text. Additional resources Configuring delivery destinations and scheduling 17.1.2. Configuring policy notifications Enable alert notifications for system policies. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Select one or more policies for which you want to send alerts. Under Bulk actions , select Enable notification . In the Enable notification window, select the Email notifier. Note If you have not configured any other integrations, the system displays a message that no notifiers are configured. Click Enable . Note Red Hat Advanced Cluster Security for Kubernetes sends notifications on an opt-in basis. To receive notifications, you must first assign a notifier to the policy. Notifications are only sent once for a given alert. If you have assigned a notifier to a policy, you will not receive a notification unless a violation generates a new alert. Red Hat Advanced Cluster Security for Kubernetes creates a new alert for the following scenarios: A policy violation occurs for the first time in a deployment. A runtime-phase policy violation occurs in a deployment after you resolved the runtime alert for a policy in that deployment. 17.2. Integrating with email on RHACS Cloud Service You can use your existing email provider or the built-in email notifier in RHACS Cloud Service to send email alerts about policy violations. To use your own email provider, you must configure the email provider as described in the section Configuring the email plugin . To use the built-in email notifier, you must configure the RHACS Cloud Service email plugin. 17.2.1. Configuring the RHACS Cloud Service email plugin The RHACS Cloud Service notifier sends an email to a recipient. You can specify the recipient in the integration, or RHACS Cloud Service can use annotation keys to find the recipient. Important You can only send 250 emails per 24-hour rolling period. If you exceed this limit, RHACS Cloud Service sends emails only after the 24-hour period ends. Because of rate limits, Red Hat recommends using email notifications only for critical alerts or vulnerability reports. Procedure Go to Platform Configuration Integrations . Under the Notifier Integrations section, select RHACS Cloud Service Email . Select New Integration . In the Integration name field, enter a name for your email integration. Specify the email address to which you want to send the email notifications in the Default recipient field. Optional: Enter an annotation key in Annotation key for recipient . You can use annotations to dynamically determine an email recipient. 
To do this: Add an annotation similar to the following example in your namespace or deployment YAML file, where email is the Annotation key that you specify in your email integration. You can create an annotation for the deployment or the namespace. Use the annotation key email in the Annotation key for recipient field. If you configured the deployment or namespace with an annotation, the RHACS Cloud Service sends the alert to the email specified in the annotation. Otherwise, it sends the alert to the default recipient. Note The following rules govern how RHACS Cloud Service determines the recipient of an email notification: If the deployment has an annotation key, the annotation's value overrides the default value. If the namespace has an annotation key, the namespace's value overrides the default value. If a deployment has an annotation key and a defined audience, RHACS Cloud Service sends an email to the audience specified in the key. If a deployment does not have an annotation key, RHACS Cloud Service checks the namespace for an annotation key and sends an email to the specified audience. If no annotation keys exist, RHACS Cloud Service sends an email to the default recipient. Additional resources Configuring delivery destinations and scheduling 17.2.2. Configuring policy notifications Enable alert notifications for system policies. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Select one or more policies for which you want to send alerts. Under Bulk actions , select Enable notification . In the Enable notification window, select the RHACS Cloud Service Email notifier. Note If you have not configured any other integrations, the system displays a message that no notifiers are configured. Click Enable . Note Red Hat Advanced Cluster Security for Kubernetes sends notifications on an opt-in basis. To receive notifications, you must first assign a notifier to the policy. Notifications are only sent once for a given alert. If you have assigned a notifier to a policy, you will not receive a notification unless a violation generates a new alert. Red Hat Advanced Cluster Security for Kubernetes creates a new alert for the following scenarios: A policy violation occurs for the first time in a deployment. A runtime-phase policy violation occurs in a deployment after you resolved the runtime alert for a policy in that deployment. | [
"annotations: email: <email_address>",
"annotations: email: <email_address>"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/integrating/integrate-using-email |
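For clarity, this is where the annotation from the snippets above sits in a complete object. The namespace name and the address are placeholders; only the email annotation key matters to RHACS, and a key set on a deployment overrides one set on its namespace, as described above.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  annotations:
    email: [email protected]
A Deployment is annotated the same way, under metadata.annotations of the Deployment object.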
Chapter 11. Networking | Chapter 11. Networking logrotate now correctly works with wpa_supplicant Previously, wpa_supplicant did not correctly truncate the log file when the logrotate script attempted to rotate it. This bug has been fixed and logrotate now correctly coordinates log rotation with wpa_supplicant . (BZ#908306) Bug fixes in system-config-network This release brings multiple bug fixes to the Network Configuration tool ( system-config-network ). Notable fixes include: Previously, when system-config-network was used to change the system host name, the new host name was appended to the /etc/hosts file every time, even if the same host name was previously used. This could cause the /etc/hosts file to be unnecessarily cluttered. With this update, new host names are only appended if they have not been used previously. A bug preventing suppression of DNS settings has been fixed and you can now suppress DNS settings by leaving the DNS field empty. In some circumstances, system-config-network could display text messages in the text-based interface before the text framework was properly cleaned, resulting in the message being distorted. This bug has been fixed and text messages from this tool now display correctly. (BZ#1086282) NetworkManager no longer brings down connections when saving a configuration file in vim Previously, editing network connection configuration files in editors which save files by deleting and recreating them (such as vim ) caused NetworkManager to bring down the edited connection if it was active at the time. This bug has been fixed and active connections can now be safely edited in any text editor. (BZ#1272617) Bond devices not created by NetworkManager now work correctly Previously, bond devices named bond0 , which created when the bonding module was loaded and not by NetworkManager , were incorrectly configured if the network service was disabled. This bug has been fixed and bond devices now work correctly with NetworkManager . (BZ#1292502) NetworkManager no longer ignores the DHCP-provided list of search domains Previously, NetworkManager used the host's DNS domain suffix to configure the DNS resolver ( /etc/resolv.conf ), and ignored the list of search domain supplied by DHCP. This bug has been fixed and NetworkManager now correctly configures the DNS resolver using DHCP. (BZ#1202539) NetworkManager can now distinguish between software and hardware devices with the same hadware address Previously, NetworkManager ignored connections for software devices such as bonds and bridges if the underlying hardware devices used the same hardware address (the HWADDR key) and used the NM_CONTROLLED=no setting. This bug has been fixed and NetworkManager now works with such devices correctly. (BZ#902907) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_technical_notes/bug_fixes_networking |
13.10. Hot Rod C++ Client | 13.10. Hot Rod C++ Client The Hot Rod C++ client is a new addition to the Hot Rod client family which includes the Hot Rod Java client. It enables C++ runtime applications to connect and interact with Red Hat JBoss Data Grid remote servers. The Hot Rod C++ client allows applications developed in C++ to read or write data to remote caches. The Hot Rod C++ client supports all three levels of client intelligence and is supported on the following platforms: Red Hat Enterprise Linux 5, 64-bit Red Hat Enterprise Linux 6, 64-bit Red Hat Enterprise Linux 7, 64-bit The Hot Rod C++ client is available as a Technology Preview on 64-bit Windows with Visual Studio 2010. Report a bug 13.10.1. Hot Rod C++ Client Formats The Hot Rod C++ client is available in the following two library formats: Static library Shared/Dynamic library Static Library The static library is statically linked to an application. This increases the size of the final executable. The application is self-contained and it does not need to ship a separate library. Shared/Dynamic Library Shared/Dynamic libraries are dynamically linked to an application at runtime. The library is stored in a separate file and can be upgraded separately from the application, without recompiling the application. Note This can only happen if the library's major version is equal to the one against which the application was linked at compile time, indicating that it is binary compatible. Report a bug 13.10.2. Hot Rod C++ Client Prerequisites In order to use the Hot Rod C++ Client, the following are needed: C++ 03 compiler with support for shared_ptr TR1 (GCC 4.0+, Visual Studio C++ 2010). Red Hat JBoss Data Grid Server 6.1.0 or higher version. Report a bug 13.10.3. Hot Rod C++ Client Download The Hot Rod C++ client is included in a separate zip file jboss-datagrid-<version>-hotrod-cpp-client-<platform>.zip under Red Hat JBoss Data Grid binaries on the Red Hat Customer Portal at https://access.redhat.com . Download the appropriate Hot Rod C++ client which applies to your operating system. Report a bug 13.10.4. Hot Rod C++ Client Configuration The Hot Rod C++ client interacts with a remote Hot Rod server using the RemoteCache API. To initiate communication with a particular Hot Rod server, configure RemoteCache and choose the specific cache on the Hot Rod server. Use the ConfigurationBuilder API to configure: The initial set of servers to connect to. Connection pooling attributes. Connection/Socket timeouts and TCP nodelay. Hot Rod protocol version. Sample C++ main executable file configuration The following example shows how to use the ConfigurationBuilder to configure a RemoteCacheManager and how to obtain the default remote cache: Example 13.6. SimpleMain.cpp Report a bug 13.10.5. Hot Rod C++ Client API The RemoteCacheManager is a starting point to obtain a reference to a RemoteCache. The RemoteCache API can interact with a remote Hot Rod server and the specific cache on that server. Using the RemoteCache reference obtained in the example, it is possible to put, get, replace and remove values in a remote cache. It is also possible to perform bulk operations, such as retrieving all of the keys, and clearing the cache. When a RemoteCacheManager is stopped, all resources in use are released. Example 13.7. SimpleMain.cpp Report a bug | [
"#include \"infinispan/hotrod/ConfigurationBuilder.h\" #include \"infinispan/hotrod/RemoteCacheManager.h\" #include \"infinispan/hotrod/RemoteCache.h\" #include <stdlib.h> using namespace infinispan::hotrod; int main(int argc, char** argv) { ConfigurationBuilder b; b.addServer().host(\"127.0.0.1\").port(11222); RemoteCacheManager cm(builder.build()); RemoteCache<std::string, std::string> cache = cm.getCache<std::string, std::string>(); return 0; }",
"RemoteCache<std::string, std::string> rc = cm.getCache<std::string, std::string>(); std::string k1(\"key13\"); std::string v1(\"boron\"); // put rc.put(k1, v1); std::auto_ptr<std::string> rv(rc.get(k1)); rc.putIfAbsent(k1, v1); std::auto_ptr<std::string> rv2(rc.get(k1)); std::map<HR_SHARED_PTR<std::string>,HR_SHARED_PTR<std::string> > map = rc.getBulk(0); std::cout << \"getBulk size\" << map.size() << std::endl; .. . cm.stop();"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-hot_rod_c_client1 |
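To build the SimpleMain.cpp example against the shared library, a command along the following lines is typically sufficient. The HOTROD_HOME variable, the include and library paths, and the -lhotrod library name are assumptions based on a default layout of the unpacked client archive; adjust them to match your installation. $ g++ -o simple_main SimpleMain.cpp -I${HOTROD_HOME}/include -L${HOTROD_HOME}/lib -lhotrod $ LD_LIBRARY_PATH=${HOTROD_HOME}/lib ./simple_main Linking against the static library instead removes the runtime dependency on the shared object, at the cost of a larger executable, as noted above.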
Chapter 51. module | Chapter 51. module This chapter describes the commands under the module command. 51.1. module list List module versions Usage: Table 51.1. Command arguments Value Summary -h, --help Show this help message and exit --all Show all modules that have version information Table 51.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 51.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 51.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 51.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack module list [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/module |
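As a usage example, the documented options combine as follows; output is omitted because the reported module versions depend on the installed clients. $ openstack module list $ openstack module list --all -f json $ openstack module list --all --max-width 80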
7.13. bind-dyndb-ldap | 7.13. bind-dyndb-ldap 7.13.1. RHBA-2013:0359 - bind-dyndb-ldap bug fix and enhancement update Updated bind-dyndb-ldap packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The dynamic LDAP back end is a plug-in for BIND that provides back-end capabilities to LDAP databases. It features support for dynamic updates and internal caching that help to reduce the load on LDAP servers. Note The bind-dyndb-ldap package has been upgraded to upstream version 2.3, which provides a number of bug fixes and enhancements over the version. In particular, many persistent search improvements. Refer to /usr/share/doc/bind-dyndb-ldap/NEWS for a detailed list of the changes. (BZ#827414) Bug Fixes BZ# 767496 When persistent search was in use, the plug-in sometimes terminated unexpectedly due to an assertion failure when the "rndc reload" command was issued and the LDAP server was not reachable. With this update, the code has been improved so that connection failures and reconnects are now handled more robustly. As a result, the plug-in no longer crashes in the scenario described. BZ# 829388 Previously, some relative domain names were not expanded correctly to FQDNs. Consequently, zone transfers sometimes contained relative domain names although they should only contain FQDNs (for example, they contained "name." record instead of "name.example.com."). The plug-in has been patched, and as a result, zone transfers now contain the correct domain names. BZ# 840381 Due to a bug in bind-dyndb-ldap, the named process sometimes terminated unexpectedly when a connection to LDAP timed out. Consequently, when a connection to LDAP timed out (or failed), the named process was sometimes aborted and DNS service was unavailable. The plug-in has been fixed and as a result, the plug-in now handles situations when a connection to LDAP fails gracefully. BZ# 856269 Due to a race condition, the plug-in sometimes caused the named process to terminate unexpectedly when it received a request to reload. Consequently, the DNS service was sometimes unavailable. A patch has been applied and as a result, the race condition during reload no longer occurs. Enhancements BZ# 733711 LDAP in Red Hat Enterprise Linux 6.4 includes support for persistent search for both zones and their resource records. Persistent search allows the bind-dyndb-ldap plug-in to be immediately informed about all changes in an LDAP database. It also decreases network bandwidth usage required by repeated polling. BZ# 829340 Previously, it was only possible to configure IPv4 forwarders in LDAP. With this update, a patch has been added to the plug-in, and as a result, the plug-in is now able to parse and use IPv6 forwarders. BIND9 syntax for "forwarders" is required. BZ# 829385 Previously, it was impossible to share one LDAP database between multiple master servers; only one master server could be used. A new bind-dyndb-ldap option "fake_mname" which allows for overriding the master server name in the SOA record has been added. With this option it is now possible to override the master server name in the SOA record so that multiple servers can act as master server for one LDAP database. BZ# 840383 When multiple named processes shared one LDAP database and dynamically updated DNS records (via DDNS), they did not update the SOA serial numbers so it was impossible to serve such zones on secondary servers correctly (that is to say, they were not updated on slave servers). 
With this update, the plug-in can now update SOA serial numbers automatically, if configured to do so. Refer to the new "serial_autoincrement" option in the /usr/share/doc/bind-dyndb-ldap/README file for more details. BZ# 869323 This update provides support for the per-zone disabling of forwarding. Some setups require the disabling of forwarding per-zone. For example, company servers are configured as authoritative for a non-public zone and have global forwarding turned on. When the non-public zone contains delegation for a non-public subdomain, the zone must have explicitly disabled forwarding otherwise the glue records will not be returned. As a result, a server can now return delegation glue records for private zones when global forwarding is turned on. Refer to /usr/share/doc/bind-dyndb-ldap/README for detailed information. Users of bind-dyndb-ldap are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. 7.13.2. RHBA-2013:0739 - bind-dyndb-ldap bug fix update Updated bind-dyndb-ldap packages that fix one bug are now available for Red Hat Enterprise Linux 6. The dynamic LDAP back-end is a plug-in for BIND that provides back-end capabilities to LDAP databases. It features support for dynamic updates and internal caching that helps to reduce the load on LDAP servers. Bug Fix BZ# 928429 The bind-dyndb-ldap plug-in processed settings too early, which led to the daemon terminating unexpectedly with a segmentation fault during startup or reload. The bind-dyndb-ldap plug-in has been fixed to process its options later, and so, no longer crashes during startup or reload. Users of bind-dyndb-ldap are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/bind-dyndb-ldap |
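The fake_mname and serial_autoincrement options mentioned above are passed to the plug-in as arguments of the dynamic-db statement in named.conf. The snippet below is a sketch only: the instance name, LDAP URI and base DN are placeholders, and the exact set of supported arguments should be verified against /usr/share/doc/bind-dyndb-ldap/README for the installed version.
dynamic-db "example" {
    library "ldap.so";
    arg "uri ldap://ldap.example.com";
    arg "base cn=dns, dc=example, dc=com";
    arg "fake_mname ns1.example.com.";
    arg "serial_autoincrement yes";
};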
Chapter 20. Red Hat Enterprise Linux 7.5 for IBM Power LE (POWER9) | Chapter 20. Red Hat Enterprise Linux 7.5 for IBM Power LE (POWER9) Red Hat Enterprise Linux 7.5 for IBM Power LE (POWER9) introduces Red Hat Enterprise Linux 7.5 user space with an updated kernel, which is based on version 4.14 and is provided by the kernel-alt packages. The offering is distributed with other updated packages, but most of them are the standard Red Hat Enterprise Linux 7 Server RPMs. Installation ISO images are available on the Customer Portal Downloads page . For information about Red Hat Enterprise Linux 7.5 installation and user space, see the Installation Guide and other Red Hat Enterprise Linux 7 documentation . For information regarding the previous version, refer to Red Hat Enterprise Linux 7.4 for IBM Power LE (POWER9) - Release Notes. Note Bare metal installations on IBM Power LE using a USB drive require you to specify the inst.stage2= boot option manually at the boot menu. See the Boot Options chapter in the Installation Guide for detailed information. 20.1. New Features and Updates Virtualization KVM virtualization is now supported on IBM POWER9 systems. However, due to hardware differences, certain features and functionalities differ from what is supported on AMD64 and Intel 64 systems. For details, see the Virtualization Deployment and Administration Guide . Platform Tools OProfile now includes support for the IBM POWER9 processor. Note that the PM_RUN_INST_CMPL OProfile performance monitoring event cannot be set up and should not be used in this version of OProfile . (BZ#1463290) This update adds support for the IBM POWER9 performance monitoring hardware events to papi . It includes basic PAPI presets for events, such as instructions ( PAPI_TOT_INS ) or processor cycles ( PAPI_TOT_CYC ). (BZ#1463291) This version of libpfm includes support for the IBM POWER9 performance monitoring hardware events. (BZ#1463292) SystemTap includes backported compatibility fixes necessary for the updated kernel. Previously, the memcpy() function from the GNU C Library ( glibc ) used unaligned vector load and store instructions on 64-bit IBM POWER systems. Consequently, when memcpy() was used to access device memory on POWER9 systems, performance would suffer. The memcpy() function has been enhanced to use aligned memory access instructions, to provide better performance for applications regardless of the memory involved on POWER9, without affecting the performance on previous generations of the POWER architecture. (BZ#1498925) Security USBGuard is now available as a Technology Preview on IBM Power LE (POWER9) The USBGuard software framework provides system protection against intrusive USB devices by implementing basic whitelisting and blacklisting capabilities based on device attributes. USBGuard is now available as a Technology Preview on IBM Power LE (POWER9). Note that USB is not supported on IBM Z, and the USBGuard framework cannot be provided on those systems. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/chap-Red_Hat_Enterprise_Linux-7.5_Release_Notes-RHEL_for_IBM_POWER9
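As a sketch of the boot option mentioned in the note above, a bare metal installation from a USB drive might append a parameter similar to the following at the boot menu. The label value is a placeholder and must match the actual volume label of your installation media (spaces in the label are escaped as \x20); see the Boot Options chapter for the authoritative syntax.
inst.stage2=hd:LABEL=RHEL-7.5\x20Server.ppc64le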
8.5. Synchronous and Asynchronous Replication | 8.5. Synchronous and Asynchronous Replication Replication mode can be synchronous or asynchronous depending on the problem being addressed. Synchronous replication blocks a thread or caller (for example on a put() operation) until the modifications are replicated across all nodes in the cluster. By waiting for acknowledgments, synchronous replication ensures that all replications are successfully applied before the operation is concluded. Asynchronous replication operates significantly faster than synchronous replication because it does not need to wait for responses from nodes. Asynchronous replication performs the replication in the background and the call returns immediately. Errors that occur during asynchronous replication are written to a log. As a result, a transaction can be successfully completed despite the fact that replication of the transaction may not have succeeded on all the cache instances in the cluster. 8.5.1. Troubleshooting Asynchronous Replication Behavior In some instances, a cache configured for asynchronous replication or distribution may wait for responses, which is synchronous behavior. This occurs because caches behave synchronously when both state transfers and asynchronous modes are configured. This synchronous behavior is a prerequisite for state transfer to operate as expected. Use one of the following to remedy this problem: Disable state transfer and use a ClusteredCacheLoader to lazily look up remote state as and when needed. Enable state transfer and REPL_SYNC . Use the Asynchronous API (for example, the cache.putAsync(k, v) method) to activate 'fire-and-forget' capabilities. Enable state transfer and REPL_ASYNC . All RPCs end up becoming synchronous, but client threads will not be held up if a replication queue is enabled (which is recommended for asynchronous mode). | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-synchronous_and_asynchronous_replication
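As a rough illustration of the replication queue recommendation above, a cache configured for asynchronous replication in library mode might look similar to the following XML. The cache name and queue sizes are placeholders, and the element and attribute names should be verified against the configuration schema for your JBoss Data Grid version.
<namedCache name="asyncReplCache">
   <clustering mode="replication">
      <!-- Queue replication requests so that client threads are not held up -->
      <async useReplQueue="true" replQueueInterval="100" replQueueMaxElements="200"/>
   </clustering>
</namedCache>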
4.11. Checking Integrity with AIDE | 4.11. Checking Integrity with AIDE Advanced Intrusion Detection Environment ( AIDE ) is a utility that creates a database of files on the system, and then uses that database to ensure file integrity and detect system intrusions. 4.11.1. Installing AIDE To install the aide package, enter the following command as root : To generate an initial database, enter the following command as root : Note In the default configuration, the aide --init command checks just a set of directories and files defined in the /etc/aide.conf file. To include additional directories or files in the AIDE database, and to change their watched parameters, edit /etc/aide.conf accordingly. To start using the database, remove the .new substring from the initial database file name: To change the location of the AIDE database, edit the /etc/aide.conf file and modify the DBDIR value. For additional security, store the database, configuration, and the /usr/sbin/aide binary file in a secure location such as read-only media. Important To avoid SELinux denials after the AIDE database location change, update your SELinux policy accordingly. See the SELinux User's and Administrator's Guide for more information. 4.11.2. Performing Integrity Checks To initiate a manual check, enter the following command as root : At a minimum, AIDE should be configured to run a weekly scan. At most, AIDE should be run daily. For example, to schedule a daily execution of AIDE at 4:05 am using cron (see the Automating System Tasks chapter in the System Administrator's Guide), add the following line to /etc/crontab : 4.11.3. Updating an AIDE Database After changes to your system, such as package updates or configuration file adjustments, are verified, update your baseline AIDE database: The aide --update command creates the /var/lib/aide/aide.db.new.gz database file. To start using it for integrity checks, remove the .new substring from the file name. 4.11.4. Additional Resources For additional information on AIDE, see the following documentation: aide(1) man page aide.conf(5) man page Guide to the Secure Configuration of Red Hat Enterprise Linux 7 (OpenSCAP Security Guide): Verify Integrity with AIDE
"~]# yum install aide",
"~]# aide --init AIDE, version 0.15.1 ### AIDE database at /var/lib/aide/aide.db.new.gz initialized.",
"~]# mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz",
"~]# aide --check AIDE 0.15.1 found differences between database and filesystem!! Start timestamp: 2017-03-30 14:12:56 Summary: Total number of files: 147173 Added files: 1 Removed files: 0 Changed files: 2",
"05 4 * * * root /usr/sbin/aide --check",
"~]# aide --update"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-using-aide |
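To illustrate the note above about extending /etc/aide.conf, a hypothetical addition could look like the following. The paths are examples only, and NORMAL is one of the rule groups predefined in the default RHEL configuration; adjust the group to match the checks you need.
# Watch web content with the predefined NORMAL group of checks
/var/www/html NORMAL
# Exclude a volatile subdirectory from monitoring
!/var/www/html/cache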
7.281. xorg-x11-drv-intel | 7.281. xorg-x11-drv-intel 7.281.1. RHBA-2013:0303 - xorg-x11-drv-intel bug fix and enhancement update Updated xorg-x11-drv-intel packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The xorg-x11-drv-intel packages contain an Intel integrated graphics video driver for the X.Org implementation of the X Window System. Note The xorg-x11-drv-intel packages have been upgraded to upstream version 2.20.2, which provides a number of bug fixes and enhancements over the previous version. (BZ#835236) All users of xorg-x11-drv-intel are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/xorg-x11-drv-intel
10.4. Customizing the Login Screen | 10.4. Customizing the Login Screen The GNOME Login Screen has several elements that can be customized. These changes can only be performed by a system administrator and affect all users. This section describes how to customize the greeter text, logo, keyboard layout, and user list. 10.4.1. Adding a Greeter Logo The greeter logo on the login screen is controlled by the org.gnome.login-screen.logo GSettings key. Since GDM uses its own dconf profile, you can add a greeter logo by changing the settings in that profile. For more information about GSettings and dconf , see Chapter 9, Configuring Desktop with GSettings and dconf . When choosing an appropriate picture for the logo of your login screen, consider the following picture requirements: All the major formats are supported: ANI, BMP, GIF, ICNS, ICO, JPEG, JPEG 2000, PCX, PNM, PBM, PGM, PPM, GTIFF, RAS, TGA, TIFF, XBM, WBMP, XPM, and SVG. The size of the picture scales proportionally to the height of 48 pixels. So, if you set the logo to 1920x1080, for example, it changes into an 85x48 thumbnail of the original picture. Procedure 10.6. Adding a logo to the login screen Create or edit the gdm profile in /etc/dconf/profile/gdm which contains the following lines: gdm is the name of a dconf database. Create a gdm database for machine-wide settings in /etc/dconf/db/gdm.d/ 01-logo : Replace /usr/share/pixmaps/logo/greeter-logo.png with the path to the image file you want to use as the greeter logo. Update the system databases: The next time you log in, the login screen will show the new logo. Note What if the Logo Does Not Update? Make sure that you have run the dconf update command as root to update the system databases. In case the logo does not update, try restarting GDM . For more information, see Section 14.1.1, "Restarting GDM" . 10.4.2. Displaying a Text Banner The text banner on the login screen is controlled by the following GSettings keys (for more information about GSettings, see Chapter 9, Configuring Desktop with GSettings and dconf ): org.gnome.login-screen.banner-message-enable enables showing the banner message. org.gnome.login-screen.banner-message-text shows the text banner message in the login window. Note that since GDM uses its own dconf profile, you can configure the text banner by changing the settings in that profile. Procedure 10.7. Displaying a Text Banner on the Login Screen Create or edit the gdm profile in /etc/dconf/profile/gdm which contains the following lines: gdm is the name of a dconf database. Create a gdm database for machine-wide settings in /etc/dconf/db/gdm.d/ 01-banner-message : Note There is no character limit for the banner message. GNOME Shell autodetects longer stretches of text and enters two-column mode. However, the banner message text cannot be read from an external file. Update the system databases: The banner text appears when you have selected yourself from the user list or when you start typing into the box. The next time you log in, you will see the text when entering the password. 10.4.2.1. What if the Banner Message Does Not Update? If the banner message does not show, make sure you have run the dconf update command. In case the banner message does not update, try restarting GDM . For more information, see Section 14.1.1, "Restarting GDM" . 10.4.3. Displaying Multiple Keyboard Layouts You can add alternative keyboard layouts for users to choose from on the login screen.
This can be helpful for users who normally use different keyboard layouts from the default and who want to have those keyboard layouts available at the login screen. Nevertheless, the selection only applies when using the login screen. Once you are logged in, your own user settings take over. Procedure 10.8. Changing the System Keyboard Layout Settings Find the codes of the required language layouts in the /usr/share/X11/xkb/rules/base.lst file under the section named ! layout . Use the localectl tool to change the system keyboard layout settings as follows: $ localectl set-x11-keymap layout You can specify multiple layouts as a comma-separated list. For example, to set es as the default layout, and us as the secondary layout, run the following command: Log out to find that the defined layouts are available at the top bar on the login screen. Note that you can also use the localectl tool to specify the machine-wide default keyboard model, variant, and options. See the localectl (1) man page for more information. 10.4.4. Disabling the Login Screen User List You can disable the user list shown on the login screen by setting the org.gnome.login-screen.disable-user-list GSettings key. When the user list is disabled, users need to type their user name and password at the prompt to log in. Procedure 10.9. Setting the org.gnome.login-screen.disable-user-list Key Create or edit the gdm profile in /etc/dconf/profile/gdm which contains the following lines: gdm is the name of a dconf database. Create a gdm database for machine-wide settings in /etc/dconf/db/gdm.d/00-login-screen : Update the system databases by running the dconf update command:
"user-db:user system-db:gdm file-db:/usr/share/gdm/greeter-dconf-defaults",
"[org/gnome/login-screen] logo=' /usr/share/pixmaps/logo/greeter-logo.png '",
"dconf update",
"user-db:user system-db:gdm file-db:/usr/share/gdm/greeter-dconf-defaults",
"[org/gnome/login-screen] banner-message-enable=true banner-message-text=' Type the banner message here '",
"dconf update",
"localectl set-x11-keymap es,us",
"user-db:user system-db:gdm file-db:/usr/share/gdm/greeter-dconf-defaults",
"Do not show the user list disable-user-list=true",
"dconf update"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/customizing-login-screen |
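As a convenience for the layout-code lookup step in Procedure 10.8, the following commands are one way to list the available codes. Both tools are part of a standard RHEL 7 installation, but the exact output depends on the installed xkeyboard-config version.
$ localectl list-x11-keymap-layouts
$ grep -A 500 '^! layout' /usr/share/X11/xkb/rules/base.lst | less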
Release Notes | Release Notes Red Hat Trusted Profile Analyzer 1.3 Release notes for Red Hat Trusted Profile Analyzer 1.3.1 Red Hat Trusted Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1.3/html/release_notes/index |
3.2. Packages Required to Install a Client | 3.2. Packages Required to Install a Client Install the ipa-client package: The ipa-client package automatically installs other required packages as dependencies, such as the System Security Services Daemon (SSSD) packages. | [
"yum install ipa-client"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/client-automatic-required-packages |
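As a quick check that the dependency pull-in described above worked, you can query the RPM database after the installation; for example, the following confirms that the client and SSSD packages are present.
rpm -q ipa-client sssd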
Chapter 21. Job slicing | Chapter 21. Job slicing A sliced job refers to the concept of a distributed job. Distributed jobs are used for running a job across a large number of hosts, enabling you to run multiple ansible-playbook runs, each on a subset of an inventory, that can be scheduled in parallel across a cluster. By default, Ansible runs jobs from a single control instance. For jobs that do not require cross-host orchestration, job slicing takes advantage of automation controller's ability to distribute work to multiple nodes in a cluster. Job slicing works by adding a Job Template field job_slice_count , which specifies the number of jobs into which to slice the Ansible run. When this number is greater than 1 , automation controller generates a workflow from a job template instead of a job. The inventory is distributed evenly amongst the slice jobs. The workflow job is then started, and proceeds as though it were a normal workflow. When launching a job, the API returns either a job resource (if job_slice_count = 1) or a workflow job resource. The corresponding User Interface (UI) redirects to the appropriate screen to display the status of the run. 21.1. Job slice considerations When setting up job slices, consider the following: A sliced job creates a workflow job, which then creates jobs. A job slice consists of a job template, an inventory, and a slice count. When executed, a sliced job splits each inventory into a number of "slice size" chunks. It then queues jobs that run ansible-playbook on each chunk of the appropriate inventory. The inventory fed into ansible-playbook is a shortened version of the original inventory that only contains the hosts in that particular slice. Completed sliced jobs that display on the Jobs list are labeled accordingly, with the number of sliced jobs that have run. These sliced jobs follow normal scheduling behavior (number of forks, queuing due to capacity, assignment to instance groups based on inventory mapping). Note Job slicing is intended to scale job executions horizontally. Enabling job slicing on a job template divides an inventory to be acted upon into the number of slices configured at launch time and then starts a job for each slice. Normally, the number of slices is equal to or less than the number of automation controller nodes. Setting an extremely high number of job slices, such as thousands, while permitted, can cause performance degradation as the job scheduler is not designed to simultaneously schedule thousands of workflow nodes, which are what the sliced jobs become. Sliced job templates with prompts or extra variables behave the same as standard job templates, applying all variables and limits to the entire set of slice jobs in the resulting workflow job. However, when passing a limit to a sliced job, if the limit causes slices to have no hosts assigned, those slices will fail, causing the overall job to fail. The job status of a distributed (sliced) job is calculated in the same manner as for workflow jobs. It fails if there are any unhandled failures in its sub-jobs. Any job that intends to orchestrate across hosts (rather than just applying changes to individual hosts) must not be configured as a slice job. Any job that does so can fail, and automation controller does not attempt to discover or account for playbooks that fail when run as slice jobs.
When slice jobs are running, job details display the workflow and job slices currently running, as well as a link to view their details individually. By default, job templates are not configured to execute simultaneously ( allow_simultaneous must be checked in the API or Enable Concurrent Jobs in the UI). Slicing overrides this behavior and implies allow_simultaneous even if that setting is clear. See Job templates for information on how to specify this, as well as the number of job slices on your job template configuration. The Job templates section provides additional detail on performing the following operations in the UI: Launch workflow jobs with a job template that has a slice number greater than one. Cancel the whole workflow or individual jobs after launching a slice job template. Relaunch the whole workflow or individual jobs after slice jobs finish running. View the details about the workflow and slice jobs after launching a job template. Search slice jobs specifically after you create them, as described in the subsequent section, "Searching job slices". 21.3. Searching job slices To make it easier to find slice jobs, use the search functionality to apply a search filter to: Job lists to show only slice jobs Job lists to show only parent workflow jobs of job slices Job template lists to only show job templates that produce slice jobs Procedure Search for slice jobs by using one of the following methods: To show only slice jobs in job lists, as in most cases, you can filter either on the type (jobs here) or unified_jobs : /api/v2/jobs/?job_slice_count__gt=1 To show only parent workflow jobs of job slices: /api/v2/workflow_jobs/?job_template__isnull=false To show only job templates that produce slice jobs: /api/v2/job_templates/?job_slice_count__gt=1
"/api/v2/jobs/?job_slice_count__gt=1",
"/api/v2/workflow_jobs/?job_template__isnull=false",
"/api/v2/job_templates/?job_slice_count__gt=1"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_user_guide/controller-job-slicing |
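For illustration, the filters listed above can be used with any HTTP client against the controller API. The following curl sketch assumes a controller reachable at controller.example.com and an OAuth2 token exported in the TOKEN variable; both values are placeholders.
curl -s -H "Authorization: Bearer $TOKEN" "https://controller.example.com/api/v2/jobs/?job_slice_count__gt=1"
curl -s -H "Authorization: Bearer $TOKEN" "https://controller.example.com/api/v2/job_templates/?job_slice_count__gt=1"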
Chapter 6. Scanning the system for configuration compliance and vulnerabilities | Chapter 6. Scanning the system for configuration compliance and vulnerabilities A compliance audit is a process of determining whether a given object follows all the rules specified in a compliance policy. The compliance policy is defined by security professionals who specify the required settings, often in the form of a checklist, that a computing environment should use. Compliance policies can vary substantially across organizations and even across different systems within the same organization. Differences among these policies are based on the purpose of each system and its importance for the organization. Custom software settings and deployment characteristics also raise a need for custom policy checklists. 6.1. Configuration compliance tools in RHEL You can perform a fully automated compliance audit in Red Hat Enterprise Linux by using the following configuration compliance tools. These tools are based on the Security Content Automation Protocol (SCAP) standard and are designed for automated tailoring of compliance policies. SCAP Workbench The scap-workbench graphical utility is designed to perform configuration and vulnerability scans on a single local or remote system. You can also use it to generate security reports based on these scans and evaluations. OpenSCAP The OpenSCAP library, with the accompanying oscap command-line utility, is designed to perform configuration and vulnerability scans on a local system, to validate configuration compliance content, and to generate reports and guides based on these scans and evaluations. Important You can experience memory-consumption problems while using OpenSCAP , which can cause the program to stop prematurely and prevent it from generating any result files. See the OpenSCAP memory-consumption problems Knowledgebase article for details. SCAP Security Guide (SSG) The scap-security-guide package provides collections of security policies for Linux systems. The guidance consists of a catalog of practical hardening advice, linked to government requirements where applicable. The project bridges the gap between generalized policy requirements and specific implementation guidelines. Script Check Engine (SCE) With SCE, which is an extension to the SCAP protocol, administrators can write their security content by using a scripting language, such as Bash, Python, and Ruby. The SCE extension is provided in the openscap-engine-sce package. The SCE itself is not part of the SCAP standard. To perform automated compliance audits on multiple systems remotely, you can use the OpenSCAP solution for Red Hat Satellite. Additional resources oscap(8) , scap-workbench(8) , and scap-security-guide(8) man pages on your system Red Hat Security Demos: Creating Customized Security Policy Content to Automate Security Compliance Red Hat Security Demos: Defend Yourself with RHEL Security Technologies Managing security compliance in Red Hat Satellite 6.2. Vulnerability scanning 6.2.1. Red Hat Security Advisories OVAL feed Red Hat Enterprise Linux security auditing capabilities are based on the Security Content Automation Protocol (SCAP) standard. SCAP is a multi-purpose framework of specifications that supports automated configuration, vulnerability and patch checking, technical control compliance activities, and security measurement.
SCAP specifications create an ecosystem where the format of security content is well-known and standardized, although the implementation of the scanner or policy editor is not mandated. This enables organizations to build their security policy (SCAP content) once, no matter how many security vendors they employ. The Open Vulnerability Assessment Language (OVAL) is the essential and oldest component of SCAP. Unlike other tools and custom scripts, OVAL describes a required state of resources in a declarative manner. OVAL code is never executed directly; instead, it is processed by an OVAL interpreter tool called a scanner. The declarative nature of OVAL ensures that the state of the assessed system is not accidentally modified. Like all other SCAP components, OVAL is based on XML. The SCAP standard defines several document formats. Each of them includes a different kind of information and serves a different purpose. Red Hat Product Security helps customers evaluate and manage risk by tracking and investigating all security issues affecting Red Hat customers. It provides timely and concise patches and security advisories on the Red Hat Customer Portal. Red Hat creates and supports OVAL patch definitions, providing machine-readable versions of our security advisories. Because of differences between platforms, versions, and other factors, Red Hat Product Security qualitative severity ratings of vulnerabilities do not directly align with the Common Vulnerability Scoring System (CVSS) baseline ratings provided by third parties. Therefore, we recommend that you use the RHSA OVAL definitions instead of those provided by third parties. The RHSA OVAL definitions are available individually and as a complete package, and are updated within an hour of a new security advisory being made available on the Red Hat Customer Portal. Each OVAL patch definition maps one-to-one to a Red Hat Security Advisory (RHSA). Because an RHSA can contain fixes for multiple vulnerabilities, each vulnerability is listed separately by its Common Vulnerabilities and Exposures (CVE) name and has a link to its entry in our public bug database. The RHSA OVAL definitions are designed to check for vulnerable versions of RPM packages installed on a system. It is possible to extend these definitions to include further checks, for example, to find out if the packages are being used in a vulnerable configuration. These definitions are designed to cover software and updates shipped by Red Hat. Additional definitions are required to detect the patch status of third-party software. Note The Red Hat Insights for Red Hat Enterprise Linux compliance service helps IT security and compliance administrators to assess, monitor, and report on the security policy compliance of Red Hat Enterprise Linux systems. You can also create and manage your SCAP security policies entirely within the compliance service UI. Additional resources Red Hat and OVAL compatibility Red Hat and CVE compatibility Notifications and Advisories in the Product Security Overview Security Data Metrics 6.2.2. Scanning the system for vulnerabilities The oscap command-line utility enables you to scan local systems, validate configuration compliance content, and generate reports and guides based on these scans and evaluations. This utility serves as a front end to the OpenSCAP library and groups its functionalities into modules (sub-commands) based on the type of SCAP content it processes. Prerequisites The openscap-scanner and bzip2 packages are installed.
Procedure Download the latest RHSA OVAL definitions for your system: Scan the system for vulnerabilities and save results to the vulnerability.html file: Verification Check the results in a browser of your choice, for example: Additional resources oscap(8) man page on your system Red Hat OVAL definitions OpenSCAP memory consumption problems 6.2.3. Scanning remote systems for vulnerabilities You can check remote systems for vulnerabilities with the OpenSCAP scanner by using the oscap-ssh tool over the SSH protocol. Prerequisites The openscap-utils and bzip2 packages are installed on the system you use for scanning. The openscap-scanner package is installed on the remote systems. The SSH server is running on the remote systems. Procedure Download the latest RHSA OVAL definitions for your system: Scan a remote system for vulnerabilities and save the results to a file: Replace: <username> @ <hostname> with the user name and host name of the remote system. <port> with the port number through which you can access the remote system, for example, 22 . <scan-report.html> with the file name where oscap saves the scan results. Additional resources oscap-ssh(8) Red Hat OVAL definitions OpenSCAP memory consumption problems 6.3. Configuration compliance scanning 6.3.1. Configuration compliance in RHEL You can use configuration compliance scanning to conform to a baseline defined by a specific organization. For example, if you work with the US government, you might have to align your systems with the Operating System Protection Profile (OSPP), and if you are a payment processor, you might have to align your systems with the Payment Card Industry Data Security Standard (PCI-DSS). You can also perform configuration compliance scanning to harden your system security. Red Hat recommends you follow the Security Content Automation Protocol (SCAP) content provided in the SCAP Security Guide package because it is in line with Red Hat best practices for affected components. The SCAP Security Guide package provides content which conforms to the SCAP 1.2 and SCAP 1.3 standards. The openscap scanner utility is compatible with both SCAP 1.2 and SCAP 1.3 content provided in the SCAP Security Guide package. Important Performing a configuration compliance scan does not guarantee the system is compliant. The SCAP Security Guide suite provides profiles for several platforms in the form of data stream documents. A data stream is a file that contains definitions, benchmarks, profiles, and individual rules. Each rule specifies the applicability and requirements for compliance. RHEL provides several profiles for compliance with security policies. In addition to the industry standard, Red Hat data streams also contain information for remediation of failed rules. Structure of compliance scanning resources A profile is a set of rules based on a security policy, such as OSPP, PCI-DSS, and Health Insurance Portability and Accountability Act (HIPAA). This enables you to audit the system in an automated way for compliance with security standards. You can modify (tailor) a profile to customize certain rules, for example, password length. For more information about profile tailoring, see Customizing a security profile with SCAP Workbench . 6.3.2. Possible results of an OpenSCAP scan Depending on the data stream and profile applied to an OpenSCAP scan, as well as various properties of your system, each rule may produce a specific result.
These are the possible results with brief explanations of their meanings: Pass The scan did not find any conflicts with this rule. Fail The scan found a conflict with this rule. Not checked OpenSCAP does not perform an automatic evaluation of this rule. Check whether your system conforms to this rule manually. Not applicable This rule does not apply to the current configuration. Not selected This rule is not part of the profile. OpenSCAP does not evaluate this rule and does not display these rules in the results. Error The scan encountered an error. For additional information, you can enter the oscap command with the --verbose DEVEL option. File a support case on the Red Hat customer portal or open a ticket in the RHEL project in Red Hat Jira . Unknown The scan encountered an unexpected situation. For additional information, you can enter the oscap command with the --verbose DEVEL option. File a support case on the Red Hat customer portal or open a ticket in the RHEL project in Red Hat Jira . 6.3.3. Viewing profiles for configuration compliance Before you decide to use profiles for scanning or remediation, you can list them and check their detailed descriptions using the oscap info subcommand. Prerequisites The openscap-scanner and scap-security-guide packages are installed. Procedure List all available files with security compliance profiles provided by the SCAP Security Guide project: Display detailed information about a selected data stream using the oscap info subcommand. XML files containing data streams are indicated by the -ds string in their names. In the Profiles section, you can find a list of available profiles and their IDs: Select a profile from the data stream file and display additional details about the selected profile. To do so, use oscap info with the --profile option followed by the last section of the ID displayed in the output of the previous command. For example, the ID of the HIPAA profile is xccdf_org.ssgproject.content_profile_hipaa , and the value for the --profile option is hipaa : Additional resources scap-security-guide(8) man page on your system OpenSCAP memory consumption problems 6.3.4. Assessing configuration compliance with a specific baseline You can determine whether your system or a remote system conforms to a specific baseline, and save the results in a report by using the oscap command-line tool. Prerequisites The openscap-scanner and scap-security-guide packages are installed. You know the ID of the profile within the baseline with which the system should comply. To find the ID, see the Viewing profiles for configuration compliance section. Procedure Scan the local system for compliance with the selected profile and save the scan results to a file: Replace: <scan-report.html> with the file name where oscap saves the scan results. <profileID> with the profile ID with which the system should comply, for example, hipaa . Optional: Scan a remote system for compliance with the selected profile and save the scan results to a file: Replace: <username> @ <hostname> with the user name and host name of the remote system. <port> with the port number through which you can access the remote system. <scan-report.html> with the file name where oscap saves the scan results. <profileID> with the profile ID with which the system should comply, for example, hipaa .
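For illustration, a local evaluation against the HIPAA profile discussed above might look like the following; the data stream path is the one shipped by the scap-security-guide package on RHEL 8, and the user, host, and port in the remote variant are placeholders.
oscap xccdf eval --report scan-report.html --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
oscap-ssh admin@server.example.com 22 xccdf eval --report scan-report.html --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml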
Additional resources scap-security-guide(8) man page on your system SCAP Security Guide documentation in the /usr/share/doc/scap-security-guide/ directory /usr/share/doc/scap-security-guide/guides/ssg-rhel8-guide-index.html - [Guide to the Secure Configuration of Red Hat Enterprise Linux 8] installed with the scap-security-guide-doc package OpenSCAP memory consumption problems 6.4. Remediating the system to align with a specific baseline You can remediate the RHEL system to align with a specific baseline. You can remediate the system to align with any profile provided by the SCAP Security Guide. For the details on listing the available profiles, see the Viewing profiles for configuration compliance section. Warning If not used carefully, running the system evaluation with the Remediate option enabled might render the system non-functional. Red Hat does not provide any automated method to revert changes made by security-hardening remediations. Remediations are supported on RHEL systems in the default configuration. If your system has been altered after the installation, running remediation might not make it compliant with the required security profile. Prerequisites The scap-security-guide package is installed. Procedure Remediate the system by using the oscap command with the --remediate option: Replace <profileID> with the profile ID with which the system should comply, for example, hipaa . Restart your system. Verification Evaluate compliance of the system with the profile, and save the scan results to a file: Replace: <scan-report.html> with the file name where oscap saves the scan results. <profileID> with the profile ID with which the system should comply, for example, hipaa . Additional resources scap-security-guide(8) and oscap(8) man pages on your system Complementing the DISA benchmark using the SSG content Knowledgebase article 6.5. Remediating the system to align with a specific baseline using an SSG Ansible playbook You can remediate your system to align with a specific baseline by using an Ansible playbook file from the SCAP Security Guide project. This example uses the Health Insurance Portability and Accountability Act (HIPAA) profile, but you can remediate to align with any other profile provided by the SCAP Security Guide. For the details on listing the available profiles, see the Viewing profiles for configuration compliance section. Warning If not used carefully, running the system evaluation with the Remediate option enabled might render the system non-functional. Red Hat does not provide any automated method to revert changes made by security-hardening remediations. Remediations are supported on RHEL systems in the default configuration. If your system has been altered after the installation, running remediation might not make it compliant with the required security profile. Prerequisites The scap-security-guide package is installed. The ansible-core package is installed. See the Ansible Installation Guide for more information. RHEL 8.6 or later is installed. For more information about installing RHEL, see Interactively installing RHEL from installation media . Note In RHEL 8.5 and earlier versions, Ansible packages were provided through Ansible Engine instead of Ansible Core, and with a different level of support. Do not use Ansible Engine because the packages might not be compatible with Ansible automation content in RHEL 8.6 and later. 
For more information, see Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories . Procedure Remediate your system to align with HIPAA by using Ansible: Restart the system. Verification Evaluate the compliance of the system with the HIPAA profile, and save the scan results to a file: Replace <scan-report.html> with the file name where oscap saves the scan results. Additional resources scap-security-guide(8) and oscap(8) man pages on your system Ansible Documentation 6.6. Creating a remediation Ansible playbook to align the system with a specific baseline You can create an Ansible playbook containing only the remediations that are required to align your system with a specific baseline. This playbook is smaller because it does not cover already satisfied requirements. Creating the playbook does not modify your system in any way; you only prepare a file for later application. This example uses the Health Insurance Portability and Accountability Act (HIPAA) profile. Note In RHEL 8.6, Ansible Engine is replaced by the ansible-core package, which contains only built-in modules. Note that many Ansible remediations use modules from the community and Portable Operating System Interface (POSIX) collections, which are not included in the built-in modules. In this case, you can use Bash remediations as a substitute for Ansible remediations. The Red Hat Connector in RHEL 8.6 includes the Ansible modules necessary for the remediation playbooks to function with Ansible Core. Prerequisites The scap-security-guide package is installed. Procedure Scan the system and save the results: Find the value of the result ID in the file with the results: Generate an Ansible playbook based on the file generated in step 1: Review the generated file, which contains the Ansible remediations for rules that failed during the scan performed in step 1. After reviewing this generated file, you can apply it by using the ansible-playbook <hipaa-remediations.yml> command. Verification In a text editor of your choice, review that the generated <hipaa-remediations.yml> file contains rules that failed in the scan performed in step 1. Additional resources scap-security-guide(8) and oscap(8) man pages on your system Ansible Documentation 6.7. Creating a remediation Bash script for a later application Use this procedure to create a Bash script containing remediations that align your system with a security profile such as HIPAA. Using the following steps, you do not make any modifications to your system; you only prepare a file for later application. Prerequisites The scap-security-guide package is installed on your RHEL system. Procedure Use the oscap command to scan the system and to save the results to an XML file. In the following example, oscap evaluates the system against the hipaa profile: Find the value of the result ID in the file with the results: Generate a Bash script based on the results file generated in step 1: The <hipaa-remediations.sh> file contains remediations for rules that failed during the scan performed in step 1. After reviewing this generated file, you can apply it with the ./ <hipaa-remediations.sh> command when you are in the same directory as this file. Verification In a text editor of your choice, review that the <hipaa-remediations.sh> file contains rules that failed in the scan performed in step 1. Additional resources scap-security-guide(8) , oscap(8) , and bash(1) man pages on your system 6.8.
Scanning the system with a customized profile using SCAP Workbench SCAP Workbench , which is contained in the scap-workbench package, is a graphical utility that enables users to perform configuration and vulnerability scans on a single local or a remote system, perform remediation of the system, and generate reports based on scan evaluations. Note that SCAP Workbench has limited functionality compared with the oscap command-line utility. SCAP Workbench processes security content in the form of data stream files. 6.8.1. Using SCAP Workbench to scan and remediate the system To evaluate your system against the selected security policy, use the following procedure. Prerequisites The scap-workbench package is installed on your system. Procedure To run SCAP Workbench from the GNOME Classic desktop environment, press the Super key to enter the Activities Overview , type scap-workbench , and then press Enter . Alternatively, use: Select a security policy using either of the following options: Load Content button on the starting window Open content from SCAP Security Guide Open Other Content in the File menu, and search the respective XCCDF, SCAP RPM, or data stream file. You can allow automatic correction of the system configuration by selecting the Remediate check box. With this option enabled, SCAP Workbench attempts to change the system configuration in accordance with the security rules applied by the policy. This process should fix the related checks that fail during the system scan. Warning If not used carefully, running the system evaluation with the Remediate option enabled might render the system non-functional. Red Hat does not provide any automated method to revert changes made by security-hardening remediations. Remediations are supported on RHEL systems in the default configuration. If your system has been altered after the installation, running remediation might not make it compliant with the required security profile. Scan your system with the selected profile by clicking the Scan button. To store the scan results in form of an XCCDF, ARF, or HTML file, click the Save Results combo box. Choose the HTML Report option to generate the scan report in human-readable format. The XCCDF and ARF (data stream) formats are suitable for further automatic processing. You can repeatedly choose all three options. To export results-based remediations to a file, use the Generate remediation role pop-up menu. 6.8.2. Customizing a security profile with SCAP Workbench You can customize a security profile by changing parameters in certain rules (for example, minimum password length), removing rules that you cover in a different way, and selecting additional rules, to implement internal policies. You cannot define new rules by customizing a profile. The following procedure demonstrates the use of SCAP Workbench for customizing (tailoring) a profile. You can also save the tailored profile for use with the oscap command-line utility. Prerequisites The scap-workbench package is installed on your system. Procedure Run SCAP Workbench , and select the profile to customize by using either Open content from SCAP Security Guide or Open Other Content in the File menu. To adjust the selected security profile according to your needs, click the Customize button. This opens the new Customization window that enables you to modify the currently selected profile without changing the original data stream file. Choose a new profile ID. 
Find a rule to modify using either the tree structure with rules organized into logical groups or the Search field. Include or exclude rules using check boxes in the tree structure, or modify values in rules where applicable. Confirm the changes by clicking the OK button. To store your changes permanently, use one of the following options: Save a customization file separately by using Save Customization Only in the File menu. Save all security content at once by Save All in the File menu. If you select the Into a directory option, SCAP Workbench saves both the data stream file and the customization file to the specified location. You can use this as a backup solution. By selecting the As RPM option, you can instruct SCAP Workbench to create an RPM package containing the data stream file and the customization file. This is useful for distributing the security content to systems that cannot be scanned remotely, and for delivering the content for further processing. Note Because SCAP Workbench does not support results-based remediations for tailored profiles, use the exported remediations with the oscap command-line utility. 6.8.3. Additional resources scap-workbench(8) man page on your system /usr/share/doc/scap-workbench/user_manual.html file provided by the scap-workbench package Deploy customized SCAP policies with Satellite 6.x (Red Hat Knowledgebase) 6.9. Deploying systems that are compliant with a security profile immediately after an installation You can use the OpenSCAP suite to deploy RHEL systems that are compliant with a security profile, such as OSPP, PCI-DSS, and HIPAA profile, immediately after the installation process. Using this deployment method, you can apply specific rules that cannot be applied later using remediation scripts, for example, a rule for password strength and partitioning. 6.9.1. Profiles not compatible with Server with GUI Certain security profiles provided as part of the SCAP Security Guide are not compatible with the extended package set included in the Server with GUI base environment. Therefore, do not select Server with GUI when installing systems compliant with one of the following profiles: Table 6.1. Profiles not compatible with Server with GUI Profile name Profile ID Justification Notes CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Server xccdf_org.ssgproject.content_profile_ cis Packages xorg-x11-server-Xorg , xorg-x11-server-common , xorg-x11-server-utils , and xorg-x11-server-Xwayland are part of the Server with GUI package set, but the policy requires their removal. CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Server xccdf_org.ssgproject.content_profile_ cis_server_l1 Packages xorg-x11-server-Xorg , xorg-x11-server-common , xorg-x11-server-utils , and xorg-x11-server-Xwayland are part of the Server with GUI package set, but the policy requires their removal. Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) xccdf_org.ssgproject.content_profile_ cui The nfs-utils package is part of the Server with GUI package set, but the policy requires its removal. Protection Profile for General Purpose Operating Systems xccdf_org.ssgproject.content_profile_ ospp The nfs-utils package is part of the Server with GUI package set, but the policy requires its removal. 
DISA STIG for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig Packages xorg-x11-server-Xorg , xorg-x11-server-common , xorg-x11-server-utils , and xorg-x11-server-Xwayland are part of the Server with GUI package set, but the policy requires their removal. To install a RHEL system as a Server with GUI aligned with DISA STIG in RHEL version 8.4 and later, you can use the DISA STIG with GUI profile. 6.9.2. Deploying baseline-compliant RHEL systems using the graphical installation Use this procedure to deploy a RHEL system that is aligned with a specific baseline. This example uses Protection Profile for General Purpose Operating System (OSPP). Warning Certain security profiles provided as part of the SCAP Security Guide are not compatible with the extended package set included in the Server with GUI base environment. For additional details, see Profiles not compatible with a GUI server . Prerequisites You have booted into the graphical installation program. Note that the OSCAP Anaconda Add-on does not support interactive text-only installation. You have accessed the Installation Summary window. Procedure From the Installation Summary window, click Software Selection . The Software Selection window opens. From the Base Environment pane, select the Server environment. You can select only one base environment. Click Done to apply the setting and return to the Installation Summary window. Because OSPP has strict partitioning requirements that must be met, create separate partitions for /boot , /home , /var , /tmp , /var/log , /var/tmp , and /var/log/audit . Click Security Policy . The Security Policy window opens. To enable security policies on the system, toggle the Apply security policy switch to ON . Select Protection Profile for General Purpose Operating Systems from the profile pane. Click Select Profile to confirm the selection. Confirm the changes in the Changes that were done or need to be done pane that is displayed at the bottom of the window. Complete any remaining manual changes. Complete the graphical installation process. Note The graphical installation program automatically creates a corresponding Kickstart file after a successful installation. You can use the /root/anaconda-ks.cfg file to automatically install OSPP-compliant systems. Verification To check the current status of the system after installation is complete, reboot the system and start a new scan: Additional resources Configuring manual partitioning 6.9.3. Deploying baseline-compliant RHEL systems using Kickstart You can deploy RHEL systems that are aligned with a specific baseline. This example uses Protection Profile for General Purpose Operating System (OSPP). Prerequisites The scap-security-guide package is installed on your RHEL 8 system. Procedure Open the /usr/share/scap-security-guide/kickstart/ssg-rhel8-ospp-ks.cfg Kickstart file in an editor of your choice. Update the partitioning scheme to fit your configuration requirements. For OSPP compliance, the separate partitions for /boot , /home , /var , /tmp , /var/log , /var/tmp , and /var/log/audit must be preserved, and you can only change the size of the partitions. Start a Kickstart installation as described in Performing an automated installation using Kickstart . Important Passwords in Kickstart files are not checked for OSPP requirements. 
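As a sketch of the partition-size adjustments mentioned in the Kickstart procedure above, the logvol lines in the profile-provided file can be edited along these lines; the volume group and logical volume names are placeholders and must match the names already used in ssg-rhel8-ospp-ks.cfg, and only the sizes should change while the separate mount points are preserved.
logvol /var/log/audit --name=varlogaudit --vgname=system_vg --size=512 --fstype=xfs
logvol /var/tmp --name=vartmp --vgname=system_vg --size=1024 --fstype=xfs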
Verification To check the current status of the system after installation is complete, reboot the system and start a new scan: Additional resources OSCAP Anaconda Add-on Kickstart commands and options reference: %addon org_fedora_oscap 6.10. Scanning container and container images for vulnerabilities Use this procedure to find security vulnerabilities in a container or a container image. Note The oscap-podman command is available from RHEL 8.2. For RHEL 8.1 and 8.0, use the workaround described in the Using OpenSCAP for scanning containers in RHEL 8 Knowledgebase article. Prerequisites The openscap-utils and bzip2 packages are installed. Procedure Download the latest RHSA OVAL definitions for your system: Get the ID of a container or a container image, for example: Scan the container or the container image for vulnerabilities and save results to the vulnerability.html file: Note that the oscap-podman command requires root privileges, and the ID of a container is the first argument. Verification Check the results in a browser of your choice, for example: Additional resources For more information, see the oscap-podman(8) and oscap(8) man pages. 6.11. Assessing security compliance of a container or a container image with a specific baseline You can assess the compliance of your container or a container image with a specific security baseline, such as Operating System Protection Profile (OSPP), Payment Card Industry Data Security Standard (PCI-DSS), and Health Insurance Portability and Accountability Act (HIPAA). Note The oscap-podman command is available from RHEL 8.2. For RHEL 8.1 and 8.0, use the workaround described in the Using OpenSCAP for scanning containers in RHEL 8 Knowledgebase article. Prerequisites The openscap-utils and scap-security-guide packages are installed. You have root access to the system. Procedure Find the ID of a container or a container image: To find the ID of a container, enter the podman ps -a command. To find the ID of a container image, enter the podman images command. Evaluate the compliance of the container or container image with a profile and save the scan results into a file: Replace: <ID> with the ID of your container or container image <scan-report.html> with the file name where oscap saves the scan results <profileID> with the profile ID with which the system should comply, for example, hipaa , ospp , or pci-dss Verification Check the results in a browser of your choice, for example: Note The rules marked as notapplicable apply only to bare-metal and virtualized systems and not to containers or container images. Additional resources oscap-podman(8) and scap-security-guide(8) man pages. /usr/share/doc/scap-security-guide/ directory. 6.12. SCAP Security Guide profiles supported in RHEL 8 Use only the SCAP content provided in the particular minor release of RHEL. This is because components that participate in hardening are sometimes updated with new capabilities. SCAP content changes to reflect these updates, but it is not always backward compatible. In the following tables, you can find the profiles provided in each minor version of RHEL, together with the version of the policy with which the profile aligns. Table 6.2. 
SCAP Security Guide profiles supported in RHEL 8.10 Profile name Profile ID Policy version French National Agency for the Security of Information Systems (ANSSI) BP-028 Enhanced Level xccdf_org.ssgproject.content_profile_ anssi_bp28_enhanced 2.0 French National Agency for the Security of Information Systems (ANSSI) BP-028 High Level xccdf_org.ssgproject.content_profile_ anssi_bp28_high 2.0 French National Agency for the Security of Information Systems (ANSSI) BP-028 Intermediary Level xccdf_org.ssgproject.content_profile_ anssi_bp28_intermediary 2.0 French National Agency for the Security of Information Systems (ANSSI) BP-028 Minimal Level xccdf_org.ssgproject.content_profile_ anssi_bp28_minimal 2.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Server xccdf_org.ssgproject.content_profile_ cis 3.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Server xccdf_org.ssgproject.content_profile_ cis_server_l1 3.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Workstation xccdf_org.ssgproject.content_profile_ cis_workstation_l1 3.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Workstation xccdf_org.ssgproject.content_profile_ cis_workstation_l2 3.0.0 Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) xccdf_org.ssgproject.content_profile_ cui r1 Australian Cyber Security Centre (ACSC) Essential Eight xccdf_org.ssgproject.content_profile_ e8 not versioned Health Insurance Portability and Accountability Act (HIPAA) xccdf_org.ssgproject.content_profile_ hipaa not versioned Australian Cyber Security Centre (ACSC) ISM Official xccdf_org.ssgproject.content_profile_ ism_o not versioned Protection Profile for General Purpose Operating Systems xccdf_org.ssgproject.content_profile_ ospp 4.2.1 PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ pci-dss RHEL 8.10.0 to RHEL 8.10.4:4.0 RHEL 8.10.5 and later:4.0.1 The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig RHEL 8.10.0:V1R13 RHEL 8.10.1 to RHEL 8.10.4:V1R14 RHEL 8.10.5 and later:V2R1 The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) with GUI for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig_gui RHEL 8.10.0:V1R13 RHEL 8.10.1 to RHEL 8.10.4:V1R14 RHEL 8.10.5 and later:V2R1 Table 6.3.
SCAP Security Guide profiles supported in RHEL 8.9 Profile name Profile ID Policy version French National Agency for the Security of Information Systems (ANSSI) BP-028 Enhanced Level xccdf_org.ssgproject.content_profile_ anssi_bp28_enhanced 2.0 French National Agency for the Security of Information Systems (ANSSI) BP-028 High Level xccdf_org.ssgproject.content_profile_ anssi_bp28_high 2.0 French National Agency for the Security of Information Systems (ANSSI) BP-028 Intermediary Level xccdf_org.ssgproject.content_profile_ anssi_bp28_intermediary 2.0 French National Agency for the Security of Information Systems (ANSSI) BP-028 Minimal Level xccdf_org.ssgproject.content_profile_ anssi_bp28_minimal 2.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Server xccdf_org.ssgproject.content_profile_ cis RHEL 8.9.0 and RHEL 8.9.2:2.0.0 RHEL 8.9.3:3.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Server xccdf_org.ssgproject.content_profile_ cis_server_l1 RHEL 8.9.0 and RHEL 8.9.2:2.0.0 RHEL 8.9.3:3.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Workstation xccdf_org.ssgproject.content_profile_ cis_workstation_l1 RHEL 8.9.0 and RHEL 8.9.2:2.0.0 RHEL 8.9.3:3.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Workstation xccdf_org.ssgproject.content_profile_ cis_workstation_l2 RHEL 8.9.0 and RHEL 8.9.2:2.0.0 RHEL 8.9.3:3.0.0 Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) xccdf_org.ssgproject.content_profile_ cui r1 Australian Cyber Security Centre (ACSC) Essential Eight xccdf_org.ssgproject.content_profile_ e8 not versioned Health Insurance Portability and Accountability Act (HIPAA) xccdf_org.ssgproject.content_profile_ hipaa not versioned Australian Cyber Security Centre (ACSC) ISM Official xccdf_org.ssgproject.content_profile_ ism_o not versioned Protection Profile for General Purpose Operating Systems xccdf_org.ssgproject.content_profile_ ospp 4.2.1 PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ pci-dss RHEL 8.9.0 and RHEL 8.9.2:3.2.1 RHEL 8.9.3:4.0 The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig RHEL 8.9.0 and RHEL 8.9.2:V1R11 RHEL 8.9.3:V1R13 The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) with GUI for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig_gui RHEL 8.9.0 and RHEL 8.9.2:V1R11 RHEL 8.9.3:V1R13 Table 6.4. 
SCAP Security Guide profiles supported in RHEL 8.8 Profile name Profile ID Policy version French National Agency for the Security of Information Systems (ANSSI) BP-028 Enhanced Level xccdf_org.ssgproject.content_profile_ anssi_bp28_enhanced 2.0 French National Agency for the Security of Information Systems (ANSSI) BP-028 High Level xccdf_org.ssgproject.content_profile_ anssi_bp28_high 2.0 French National Agency for the Security of Information Systems (ANSSI) BP-028 Intermediary Level xccdf_org.ssgproject.content_profile_ anssi_bp28_intermediary 2.0 French National Agency for the Security of Information Systems (ANSSI) BP-028 Minimal Level xccdf_org.ssgproject.content_profile_ anssi_bp28_minimal 2.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Server xccdf_org.ssgproject.content_profile_ cis RHEL 8.8.0 and RHEL 8.8.5:2.0.0 RHEL 8.8.6:3.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Server xccdf_org.ssgproject.content_profile_ cis_server_l1 RHEL 8.8.0 and RHEL 8.8.5:2.0.0 RHEL 8.8.6:3.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Workstation xccdf_org.ssgproject.content_profile_ cis_workstation_l1 RHEL 8.8.0 and RHEL 8.8.5:2.0.0 RHEL 8.8.6:3.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Workstation xccdf_org.ssgproject.content_profile_ cis_workstation_l2 RHEL 8.8.0 and RHEL 8.8.5:2.0.0 RHEL 8.8.6:3.0.0 Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) xccdf_org.ssgproject.content_profile_ cui r1 Australian Cyber Security Centre (ACSC) Essential Eight xccdf_org.ssgproject.content_profile_ e8 not versioned Health Insurance Portability and Accountability Act (HIPAA) xccdf_org.ssgproject.content_profile_ hipaa not versioned Australian Cyber Security Centre (ACSC) ISM Official xccdf_org.ssgproject.content_profile_ ism_o not versioned Protection Profile for General Purpose Operating Systems xccdf_org.ssgproject.content_profile_ ospp 4.2.1 PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ pci-dss RHEL 8.8.0 and RHEL 8.8.5:3.2.1 RHEL 8.8.6 to RHEL 8.8.12:4.0 RHEL 8.8.13 and later:4.0.1 The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig RHEL 8.8.0 to RHEL 8.8.5:V1R9 RHEL 8.8.6 to RHEL 8.8.7:V1R13 RHEL 8.8.8 to RHEL 8.8.12:V1R14 RHEL 8.8.13 and later:V2R1 The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) with GUI for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig_gui RHEL 8.8.0 to RHEL 8.8.5:V1R9 RHEL 8.8.6 to RHEL 8.8.7:V1R13 RHEL 8.8.8 to RHEL 8.8.12:V1R14 RHEL 8.8.13 and later:V2R1 Table 6.5. 
SCAP Security Guide profiles supported in RHEL 8.7 Profile name Profile ID Policy version French National Agency for the Security of Information Systems (ANSSI) BP-028 Enhanced Level xccdf_org.ssgproject.content_profile_ anssi_bp28_enhanced 1.2 French National Agency for the Security of Information Systems (ANSSI) BP-028 High Level xccdf_org.ssgproject.content_profile_ anssi_bp28_high 1.2 French National Agency for the Security of Information Systems (ANSSI) BP-028 Intermediary Level xccdf_org.ssgproject.content_profile_ anssi_bp28_intermediary 1.2 French National Agency for the Security of Information Systems (ANSSI) BP-028 Minimal Level xccdf_org.ssgproject.content_profile_ anssi_bp28_minimal 1.2 CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Server xccdf_org.ssgproject.content_profile_ cis 2.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Server xccdf_org.ssgproject.content_profile_ cis_server_l1 2.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Workstation xccdf_org.ssgproject.content_profile_ cis_workstation_l1 2.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Workstation xccdf_org.ssgproject.content_profile_ cis_workstation_l2 2.0.0 Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) xccdf_org.ssgproject.content_profile_ cui r1 Australian Cyber Security Centre (ACSC) Essential Eight xccdf_org.ssgproject.content_profile_ e8 not versioned Health Insurance Portability and Accountability Act (HIPAA) xccdf_org.ssgproject.content_profile_ hipaa not versioned Australian Cyber Security Centre (ACSC) ISM Official xccdf_org.ssgproject.content_profile_ ism_o not versioned Protection Profile for General Purpose Operating Systems xccdf_org.ssgproject.content_profile_ ospp 4.2.1 PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ pci-dss 3.2.1 The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig RHEL 8.7.0 and RHEL 8.7.1:V1R7 RHEL 8.7.2 and later:V1R9 The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) with GUI for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig_gui RHEL 8.7.0 and RHEL 8.7.1:V1R7 RHEL 8.7.2 and later:V1R9 Table 6.6. 
SCAP Security Guide profiles supported in RHEL 8.6 Profile name Profile ID Policy version French National Agency for the Security of Information Systems (ANSSI) BP-028 Enhanced Level xccdf_org.ssgproject.content_profile_ anssi_bp28_enhanced RHEL 8.6.0 to 8.6.10:1.2 RHEL 8.6.11 and later:2.0 French National Agency for the Security of Information Systems (ANSSI) BP-028 High Level xccdf_org.ssgproject.content_profile_ anssi_bp28_high RHEL 8.6.0 to 8.6.10:1.2 RHEL 8.6.11 and later:2.0 French National Agency for the Security of Information Systems (ANSSI) BP-028 Intermediary Level xccdf_org.ssgproject.content_profile_ anssi_bp28_intermediary RHEL 8.6.0 to 8.6.10:1.2 RHEL 8.6.11 and later:2.0 French National Agency for the Security of Information Systems (ANSSI) BP-028 Minimal Level xccdf_org.ssgproject.content_profile_ anssi_bp28_minimal RHEL 8.6.0 to 8.6.10:1.2 RHEL 8.6.11 and later:2.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Server xccdf_org.ssgproject.content_profile_ cis RHEL 8.6.0 to RHEL 8.6.2:1.0.0 RHEL 8.6.3 to RHEL 8.6.15:2.0.0 RHEL 8.6.16 and later:3.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Server xccdf_org.ssgproject.content_profile_ cis_server_l1 RHEL 8.6.0 to RHEL 8.6.2:1.0.0 RHEL 8.6.3 to RHEL 8.6.15:2.0.0 RHEL 8.6.16 and later:3.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Workstation xccdf_org.ssgproject.content_profile_ cis_workstation_l1 RHEL 8.6.0 to RHEL 8.6.2:1.0.0 RHEL 8.6.3 to RHEL 8.6.15:2.0.0 RHEL 8.6.16 and later:3.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Workstation xccdf_org.ssgproject.content_profile_ cis_workstation_l2 RHEL 8.6.0 to RHEL 8.6.2:1.0.0 RHEL 8.6.3 to RHEL 8.6.15:2.0.0 RHEL 8.6.16 and later:3.0.0 Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) xccdf_org.ssgproject.content_profile_ cui r1 Australian Cyber Security Centre (ACSC) Essential Eight xccdf_org.ssgproject.content_profile_ e8 not versioned Health Insurance Portability and Accountability Act (HIPAA) xccdf_org.ssgproject.content_profile_ hipaa not versioned Australian Cyber Security Centre (ACSC) ISM Official xccdf_org.ssgproject.content_profile_ ism_o not versioned Protection Profile for General Purpose Operating Systems xccdf_org.ssgproject.content_profile_ ospp 4.2.1 PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ pci-dss 3.2.1 The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig RHEL 8.6.0:V1R5 RHEL 8.6.1 and RHEL 8.6.2:V1R6 RHEL 8.6.3 to RHEL 8.6.6:V1R7 RHEL 8.6.7 to RHEL 8.6.10:V1R9 RHEL 8.6.11 to RHEL 8.6.15:V1R11 RHEL 8.6.16 and RHEL 8.6.17:V1R13 RHEL 8.6.18 and later:V1R14 The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) with GUI for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig_gui RHEL 8.6.0:V1R5 RHEL 8.6.1 and RHEL 8.6.2:V1R6 RHEL 8.6.3 to RHEL 8.6.6:V1R7 RHEL 8.6.7 to RHEL 8.6.10:V1R9 RHEL 8.6.11 to RHEL 8.6.15:V1R11 RHEL 8.6.16 and RHEL 8.6.17:V1R13 RHEL 8.6.18 and later:V1R14 Table 6.7. 
SCAP Security Guide profiles supported in RHEL 8.5 Profile name Profile ID Policy version French National Agency for the Security of Information Systems (ANSSI) BP-028 Enhanced Level xccdf_org.ssgproject.content_profile_ anssi_bp28_enhanced 1.2 French National Agency for the Security of Information Systems (ANSSI) BP-028 High Level xccdf_org.ssgproject.content_profile_ anssi_bp28_high 1.2 French National Agency for the Security of Information Systems (ANSSI) BP-028 Intermediary Level xccdf_org.ssgproject.content_profile_ anssi_bp28_intermediary 1.2 French National Agency for the Security of Information Systems (ANSSI) BP-028 Minimal Level xccdf_org.ssgproject.content_profile_ anssi_bp28_minimal 1.2 CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Server xccdf_org.ssgproject.content_profile_ cis 1.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Server xccdf_org.ssgproject.content_profile_ cis_server_l1 1.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Workstation xccdf_org.ssgproject.content_profile_ cis_workstation_l1 1.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Workstation xccdf_org.ssgproject.content_profile_ cis_workstation_l2 1.0.0 Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) xccdf_org.ssgproject.content_profile_ cui r1 Australian Cyber Security Centre (ACSC) Essential Eight xccdf_org.ssgproject.content_profile_ e8 not versioned Health Insurance Portability and Accountability Act (HIPAA) xccdf_org.ssgproject.content_profile_ hipaa not versioned Australian Cyber Security Centre (ACSC) ISM Official xccdf_org.ssgproject.content_profile_ ism_o not versioned Protection Profile for General Purpose Operating Systems xccdf_org.ssgproject.content_profile_ ospp 4.2.1 PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ pci-dss 3.2.1 The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig RHEL 8.5.0 to RHEL 8.5.3:V1R3 RHEL 8.5.4 and later:V1R5 The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) with GUI for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig_gui RHEL 8.5.0 to RHEL 8.5.3:V1R3 RHEL 8.5.4 and later:V1R5 Table 6.8. 
SCAP Security Guide profiles supported in RHEL 8.4 Profile name Profile ID Policy version French National Agency for the Security of Information Systems (ANSSI) BP-028 Enhanced Level xccdf_org.ssgproject.content_profile_ anssi_bp28_enhanced 1.2 French National Agency for the Security of Information Systems (ANSSI) BP-028 High Level xccdf_org.ssgproject.content_profile_ anssi_bp28_high RHEL 8.4.4 and later:1.2 French National Agency for the Security of Information Systems (ANSSI) BP-028 Intermediary Level xccdf_org.ssgproject.content_profile_ anssi_bp28_intermediary 1.2 French National Agency for the Security of Information Systems (ANSSI) BP-028 Minimal Level xccdf_org.ssgproject.content_profile_ anssi_bp28_minimal 1.2 CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Server xccdf_org.ssgproject.content_profile_ cis RHEL 8.4.3 and earlier:1.0.0 RHEL 8.4.4 to RHEL 8.4.10:1.0.1 RHEL 8.4.11 and later:2.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Server xccdf_org.ssgproject.content_profile_ cis_server_l1 RHEL 8.4.4 to RHEL 8.4.10:1.0.1 RHEL 8.4.11 and later:2.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 1 - Workstation xccdf_org.ssgproject.content_profile_ cis_workstation_l1 RHEL 8.4.4 to RHEL 8.4.10:1.0.1 RHEL 8.4.11 and later:2.0.0 CIS Red Hat Enterprise Linux 8 Benchmark for Level 2 - Workstation xccdf_org.ssgproject.content_profile_ cis_workstation_l2 RHEL 8.4.4 to RHEL 8.4.10:1.0.1 RHEL 8.4.11 and later:2.0.0 Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) xccdf_org.ssgproject.content_profile_ cui r1 Australian Cyber Security Centre (ACSC) Essential Eight xccdf_org.ssgproject.content_profile_ e8 not versioned Australian Cyber Security Centre (ACSC) ISM Official xccdf_org.ssgproject.content_profile_ ism_o RHEL 8.4.4 and later:not versioned Health Insurance Portability and Accountability Act (HIPAA) xccdf_org.ssgproject.content_profile_ hipaa not versioned Protection Profile for General Purpose Operating Systems xccdf_org.ssgproject.content_profile_ ospp 4.2.1 PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ pci-dss 3.2.1 The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig RHEL 8.4.3 and earlier:V1R1 RHEL 8.4.4 to RHEL 8.4.7:V1R3 RHEL 8.4.8:V1R5 RHEL 8.4.9 to RHEL 8.4.10:V1R6 RHEL 8.4.11 to RHEL 8.4.14:V1R7 RHEL 8.4.15 and later:V1R9 The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) with GUI for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig_gui RHEL 8.4.4 to RHEL 8.4.7:V1R3 RHEL 8.4.8:V1R5 RHEL 8.4.9 to RHEL 8.4.10:V1R6 RHEL 8.4.11 to RHEL 8.4.14:V1R7 RHEL 8.4.15 and later:V1R9 Table 6.9. 
SCAP Security Guide profiles supported in RHEL 8.3 Profile name Profile ID Policy version CIS Red Hat Enterprise Linux 8 Benchmark xccdf_org.ssgproject.content_profile_ cis 1.0.0 Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) xccdf_org.ssgproject.content_profile_ cui r1 Australian Cyber Security Centre (ACSC) Essential Eight xccdf_org.ssgproject.content_profile_ e8 not versioned Health Insurance Portability and Accountability Act (HIPAA) xccdf_org.ssgproject.content_profile_ hipaa not versioned Protection Profile for General Purpose Operating Systems xccdf_org.ssgproject.content_profile_ ospp 4.2.1 PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ pci-dss 3.2.1 [DRAFT] The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig draft Table 6.10. SCAP Security Guide profiles supported in RHEL 8.2 Profile name Profile ID Policy version Australian Cyber Security Centre (ACSC) Essential Eight xccdf_org.ssgproject.content_profile_ e8 not versioned Protection Profile for General Purpose Operating Systems xccdf_org.ssgproject.content_profile_ ospp 4.2.1 PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ pci-dss 3.2.1 [DRAFT] DISA STIG for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ stig draft Table 6.11. SCAP Security Guide profiles supported in RHEL 8.1 Profile name Profile ID Policy version Protection Profile for General Purpose Operating Systems xccdf_org.ssgproject.content_profile_ ospp 4.2.1 PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ pci-dss 3.2.1 Table 6.12. SCAP Security Guide profiles supported in RHEL 8.0 Profile name Profile ID Policy version OSPP - Protection Profile for General Purpose Operating Systems xccdf_org.ssgproject.content_profile_ ospp draft PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 8 xccdf_org.ssgproject.content_profile_ pci-dss 3.2.1 6.13. Additional resources Supported versions of the SCAP Security Guide in RHEL The OpenSCAP project page provides detailed information about the oscap utility and other components and projects related to SCAP. The SCAP Workbench project page provides detailed information about the scap-workbench application. The SCAP Security Guide (SSG) project page provides the latest security content for Red Hat Enterprise Linux. Using OpenSCAP for security compliance and vulnerability scanning - A hands-on lab on running tools based on the Security Content Automation Protocol (SCAP) standard for compliance and vulnerability scanning in RHEL. Red Hat Security Demos: Creating Customized Security Policy Content to Automate Security Compliance - A hands-on lab to get initial experience in automating security compliance using the tools that are included in RHEL to comply with both industry standard security policies and custom security policies. If you want training or access to these lab exercises for your team, contact your Red Hat account team for additional details. Red Hat Security Demos: Defend Yourself with RHEL Security Technologies - A hands-on lab to learn how to implement security at all levels of your RHEL system, using the key security technologies available to you in RHEL, including OpenSCAP. 
If you want training or access to these lab exercises for your team, contact your Red Hat account team for additional details. National Institute of Standards and Technology (NIST) SCAP page has a vast collection of SCAP-related materials, including SCAP publications, specifications, and the SCAP Validation Program. National Vulnerability Database (NVD) has the largest repository of SCAP content and other SCAP standards-based vulnerability management data. Red Hat OVAL content repository contains OVAL definitions for vulnerabilities of RHEL systems. This is the recommended source of vulnerability content. MITRE CVE - This is a database of publicly known security vulnerabilities provided by the MITRE corporation. For RHEL, using OVAL CVE content provided by Red Hat is recommended. MITRE OVAL - This is an OVAL-related project provided by the MITRE corporation. Among other OVAL-related information, these pages contain the OVAL language and a repository of OVAL content with thousands of OVAL definitions. Note that for scanning RHEL, using OVAL CVE content provided by Red Hat is recommended. Managing security compliance in Red Hat Satellite - This set of guides describes, among other topics, how to maintain system security on multiple systems by using OpenSCAP. | [
"wget -O - https://www.redhat.com/security/data/oval/v2/RHEL8/rhel-8.oval.xml.bz2 | bzip2 --decompress > rhel-8.oval.xml",
"oscap oval eval --report vulnerability.html rhel-8.oval.xml",
"firefox vulnerability.html &",
"wget -O - https://www.redhat.com/security/data/oval/v2/RHEL8/rhel-8.oval.xml.bz2 | bzip2 --decompress > rhel-8.oval.xml",
"oscap-ssh <username> @ <hostname> <port> oval eval --report <scan-report.html> rhel-8.oval.xml",
"Data stream ├── xccdf | ├── benchmark | ├── profile | | ├──rule reference | | └──variable | ├── rule | ├── human readable data | ├── oval reference ├── oval ├── ocil reference ├── ocil ├── cpe reference └── cpe └── remediation",
"ls /usr/share/xml/scap/ssg/content/ ssg-firefox-cpe-dictionary.xml ssg-rhel6-ocil.xml ssg-firefox-cpe-oval.xml ssg-rhel6-oval.xml ... ssg-rhel6-ds-1.2.xml ssg-rhel8-oval.xml ssg-rhel8-ds.xml ssg-rhel8-xccdf.xml ...",
"oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml Profiles: ... Title: Health Insurance Portability and Accountability Act (HIPAA) Id: xccdf_org.ssgproject.content_profile_hipaa Title: PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 8 Id: xccdf_org.ssgproject.content_profile_pci-dss Title: OSPP - Protection Profile for General Purpose Operating Systems Id: xccdf_org.ssgproject.content_profile_ospp ...",
"oscap info --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml ... Profile Title: Health Insurance Portability and Accountability Act (HIPAA) Description: The HIPAA Security Rule establishes U.S. national standards to protect individuals' electronic personal health information that is created, received, used, or maintained by a covered entity. ...",
"oscap xccdf eval --report <scan-report.html> --profile <profileID> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml",
"oscap-ssh <username> @ <hostname> <port> xccdf eval --report <scan-report.html> --profile <profileID> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml",
"oscap xccdf eval --profile <profileID> --remediate /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml",
"oscap xccdf eval --report <scan-report.html> --profile <profileID> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml",
"ansible-playbook -i localhost, -c local /usr/share/scap-security-guide/ansible/rhel8-playbook-hipaa.yml",
"oscap xccdf eval --profile hipaa --report <scan-report.html> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml",
"oscap xccdf eval --profile hipaa --results <hipaa-results.xml> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml",
"oscap info <hipaa-results.xml>",
"oscap xccdf generate fix --fix-type ansible --result-id <xccdf_org.open-scap_testresult_xccdf_org.ssgproject.content_profile_hipaa> --output <hipaa-remediations.yml> <hipaa-results.xml>",
"oscap xccdf eval --profile hipaa --results <hipaa-results.xml> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml",
"oscap info <hipaa-results.xml>",
"oscap xccdf generate fix --fix-type bash --result-id <xccdf_org.open-scap_testresult_xccdf_org.ssgproject.content_profile_hipaa> --output <hipaa-remediations.sh> <hipaa-results.xml>",
"scap-workbench &",
"oscap xccdf eval --profile ospp --report eval_postinstall_report.html /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml",
"oscap xccdf eval --profile ospp --report eval_postinstall_report.html /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml",
"wget -O - https://www.redhat.com/security/data/oval/v2/RHEL8/rhel-8.oval.xml.bz2 | bzip2 --decompress > rhel-8.oval.xml",
"podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi8/ubi latest 096cae65a207 7 weeks ago 239 MB",
"oscap-podman 096cae65a207 oval eval --report vulnerability.html rhel-8.oval.xml",
"firefox vulnerability.html &",
"oscap-podman <ID> xccdf eval --report <scan-report.html> --profile <profileID> /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml",
"firefox <scan-report.html> &"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/security_hardening/scanning-the-system-for-configuration-compliance-and-vulnerabilities_security-hardening |
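The container-scanning procedure in the section above can be combined into a single pass over all local images. The following sketch is illustrative only: it assumes the openscap-utils, bzip2, and podman packages are installed, that it runs as root (oscap-podman requires root privileges), and that the report directory /tmp/oval-reports is a hypothetical location you may change freely.
# Download the RHEL 8 OVAL definitions once, then scan every local container image.
wget -O - https://www.redhat.com/security/data/oval/v2/RHEL8/rhel-8.oval.xml.bz2 | bzip2 --decompress > rhel-8.oval.xml
mkdir -p /tmp/oval-reports
for image_id in $(podman images --format '{{.ID}}'); do
    # Write one HTML vulnerability report per image ID.
    oscap-podman "$image_id" oval eval --report "/tmp/oval-reports/${image_id}.html" rhel-8.oval.xml
done
The same loop can be adapted to baseline compliance scanning by replacing the oval eval arguments with the xccdf eval --profile options shown earlier in this chapter.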
Chapter 5. Compatibility Level Versions | Chapter 5. Compatibility Level Versions Each host connected to Red Hat Virtualization Manager contains a version of VDSM . VDSM is the agent within the virtualization infrastructure that runs on a hypervisor or host and provides local management for virtual machines, networks and storage. Red Hat Virtualization Manager controls hypervisors and hosts using current or earlier versions of VDSM. The Manager migrates virtual machines from host to host within a cluster. This means the Manager excludes certain features from a current version of VDSM until all hosts within a cluster have the same VDSM version, or more recent, installed. The API represents this concept as a compatibility level for each host, corresponding to the version of VDSM installed. A version element contains major and minor attributes, which describe the compatibility level. When an administrator upgrades all hosts within a cluster to a certain level, the version level appears under a supported_versions element. This indicates the cluster's version is now updatable to that level. Once the administrator updates all clusters within a data center to a given level, the data center is updatable to that level. 5.1. Upgrading Compatibility Levels Example 5.1. Upgrading compatibility levels The API reports the following compatibility levels for a Red Hat Enterprise Virtualization Manager 3.4 instance: All hosts within a cluster are updated to VDSM 3.5 and the API reports: The cluster is now updatable to 3.5 . When the cluster is updated, the API reports: The API user updates the data center to 3.5 . Once upgraded, the API exposes features available in Red Hat Enterprise Virtualization 3.5 for this data center. | [
"<host ...> <version major=\"4\" minor=\"14\" build=\"11\" revision=\"0\" full_version=\"vdsm-4.14.11-5.el6ev\"/> </host> <cluster ...> <version major=\"3\" minor=\"4\"/> </cluster> <data_center ...> <version major=\"3\" minor=\"4\"/> <supported_versions/> </data_center>",
"<host ...> <version major=\"4\" minor=\"16\" build=\"7\" revision=\"4\" full_version=\"vdsm-4.16.7.4-1.el6ev\"/> </host> <cluster ...> <version major=\"3\" minor=\"4\"/> <supported_versions> <version major=\"3\" minor=\"5\"/> </supported_versions> </cluster> <data_center ...> <version major=\"3\" minor=\"4\"/> <supported_versions/> </data_center>",
"<cluster ...> <version major=\"3\" minor=\"5\"/> <supported_versions/> </cluster> <data_center ...> <version major=\"3\" minor=\"4\"/> <supported_versions> <version major=\"3\" minor=\"5\"/> </supported_versions> </data_center>"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/chap-compatibility_level_versions |
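To see how the compatibility data from Example 5.1 is read and changed in practice, the following hedged sketch queries and updates a cluster through the version 3 REST API with curl. The Manager URL, credentials, and cluster ID are placeholders, and the exact form of the update request should be verified against your Manager version before use.
# Inspect the current and supported compatibility levels of a cluster.
curl -k -u admin@internal:password -H "Accept: application/xml" https://manager.example.com/api/clusters/<cluster_id>
# Raise the cluster compatibility level once every host in the cluster supports it.
curl -k -u admin@internal:password -X PUT -H "Content-Type: application/xml" -d '<cluster><version major="3" minor="5"/></cluster>' https://manager.example.com/api/clusters/<cluster_id>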
5.9. Configuring Fencing Levels | 5.9. Configuring Fencing Levels Pacemaker supports fencing nodes with multiple devices through a feature called fencing topologies. To implement topologies, create the individual devices as you normally would and then define one or more fencing levels in the fencing topology section in the configuration. Each level is attempted in ascending numeric order, starting at 1. If a device fails, processing terminates for the current level. No further devices in that level are exercised, and the next level is attempted instead. If all devices are successfully fenced, then that level has succeeded and no other levels are tried. The operation is finished when a level has passed (success), or all levels have been attempted (failed). Use the following command to add a fencing level to a node. The devices are given as a comma-separated list of stonith ids, which are attempted for the node at that level. The following command lists all of the fencing levels that are currently configured. In the following example, there are two fence devices configured for node rh7-2 : an ilo fence device called my_ilo and an apc fence device called my_apc . These commands set up fence levels so that if the device my_ilo fails and is unable to fence the node, then Pacemaker will attempt to use the device my_apc . This example also shows the output of the pcs stonith level command after the levels are configured. The following command removes the fence level for the specified node and devices. If no nodes or devices are specified then the fence level you specify is removed from all nodes. The following command clears the fence levels on the specified node or stonith id. If you do not specify a node or stonith id, all fence levels are cleared. If you specify more than one stonith id, they must be separated by a comma and no spaces, as in the following example. The following command verifies that all fence devices and nodes specified in fence levels exist. As of Red Hat Enterprise Linux 7.4, you can specify nodes in fencing topology by a regular expression applied on a node name and by a node attribute and its value. For example, the following commands configure nodes node1 , node2 , and node3 to use fence devices apc1 and apc2 , and nodes node4 , node5 , and node6 to use fence devices apc3 and apc4 . The following commands yield the same results by using node attribute matching. An end-to-end sketch that creates example devices and levels together follows the command listing below. | [
"pcs stonith level add level node devices",
"pcs stonith level",
"pcs stonith level add 1 rh7-2 my_ilo pcs stonith level add 2 rh7-2 my_apc pcs stonith level Node: rh7-2 Level 1 - my_ilo Level 2 - my_apc",
"pcs stonith level remove level [ node_id ] [ stonith_id ] ... [ stonith_id ]",
"pcs stonith level clear [ node | stonith_id (s)]",
"pcs stonith level clear dev_a,dev_b",
"pcs stonith level verify",
"pcs stonith level add 1 \"regexp%node[1-3]\" apc1,apc2 pcs stonith level add 1 \"regexp%node[4-6]\" apc3,apc4",
"pcs node attribute node1 rack=1 pcs node attribute node2 rack=1 pcs node attribute node3 rack=1 pcs node attribute node4 rack=2 pcs node attribute node5 rack=2 pcs node attribute node6 rack=2 pcs stonith level add 1 attrib%rack=1 apc1,apc2 pcs stonith level add 1 attrib%rack=2 apc3,apc4"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-fencelevels-haar |
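The end-to-end sketch referenced above creates the two example devices for node rh7-2 and then layers the fencing levels on top of them. The IP addresses, credentials, and agent options are placeholders, not tested values; run pcs stonith describe fence_ilo and pcs stonith describe fence_apc to confirm the parameters your hardware actually requires.
# Create the iLO and APC fence devices with placeholder connection details.
pcs stonith create my_ilo fence_ilo ipaddr=192.0.2.10 login=admin passwd=secret pcmk_host_list=rh7-2
pcs stonith create my_apc fence_apc ipaddr=192.0.2.20 login=admin passwd=secret pcmk_host_list=rh7-2
# Try the iLO device first and fall back to the APC switch only if it fails.
pcs stonith level add 1 rh7-2 my_ilo
pcs stonith level add 2 rh7-2 my_apc
# Confirm that every device and node referenced in the levels exists, then review the topology.
pcs stonith level verify
pcs stonith level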
Chapter 10. Overriding Core Backend Service Configuration | Chapter 10. Overriding Core Backend Service Configuration The Red Hat Developer Hub (RHDH) backend platform consists of a number of core services that are well encapsulated. The RHDH backend installs these default core services statically during initialization. You can configure these core services by customizing the backend source code and rebuilding your Developer Hub application. Alternatively, you can customize a core service by installing it as a BackendFeature by using dynamic plugin functionality. To use the dynamic plugin functionality to customize a core service in your RHDH application, you must configure the backend to avoid statically installing a given default core service. For example, adding a middleware function to handle all incoming requests can be done by installing a custom configure function for the root HTTP router backend service, which allows access to the underlying Express application. Example of a BackendFeature middleware function to handle incoming HTTP requests // Create the BackendFeature export const customRootHttpServerFactory: BackendFeature = rootHttpRouterServiceFactory({ configure: ({ app, routes, middleware, logger }) => { logger.info( 'Using custom root HttpRouterServiceFactory configure function', ); app.use(middleware.helmet()); app.use(middleware.cors()); app.use(middleware.compression()); app.use(middleware.logging()); // Add the custom middleware function before all // of the route handlers app.use(addTestHeaderMiddleware({ logger })); app.use(routes); app.use(middleware.notFound()); app.use(middleware.error()); }, }); // Export the BackendFeature as the default entrypoint export default customRootHttpServerFactory; In the above example, as the BackendFeature overrides the default implementation of the HTTP router service, you must set the ENABLE_CORE_ROOTHTTPROUTER_OVERRIDE environment variable to true so that the Developer Hub does not install the default implementation automatically. 10.1. Overriding environment variables To allow a dynamic plugin to load a core service override, you must start the Developer Hub backend with the corresponding core service ID environment variable set to true . Table 10.1. Environment variables and core service IDs Variable Description ENABLE_CORE_AUTH_OVERRIDE Override the core.auth service ENABLE_CORE_CACHE_OVERRIDE Override the core.cache service ENABLE_CORE_ROOTCONFIG_OVERRIDE Override the core.rootConfig service ENABLE_CORE_DATABASE_OVERRIDE Override the core.database service ENABLE_CORE_DISCOVERY_OVERRIDE Override the core.discovery service ENABLE_CORE_HTTPAUTH_OVERRIDE Override the core.httpAuth service ENABLE_CORE_HTTPROUTER_OVERRIDE Override the core.httpRouter service ENABLE_CORE_LIFECYCLE_OVERRIDE Override the core.lifecycle service ENABLE_CORE_LOGGER_OVERRIDE Override the core.logger service ENABLE_CORE_PERMISSIONS_OVERRIDE Override the core.permissions service ENABLE_CORE_ROOTHEALTH_OVERRIDE Override the core.rootHealth service ENABLE_CORE_ROOTHTTPROUTER_OVERRIDE Override the core.rootHttpRouter service ENABLE_CORE_ROOTLIFECYCLE_OVERRIDE Override the core.rootLifecycle service ENABLE_CORE_SCHEDULER_OVERRIDE Override the core.scheduler service ENABLE_CORE_USERINFO_OVERRIDE Override the core.userInfo service ENABLE_CORE_URLREADER_OVERRIDE Override the core.urlReader service ENABLE_EVENTS_SERVICE_OVERRIDE Override the events.service service | [
"// Create the BackendFeature export const customRootHttpServerFactory: BackendFeature = rootHttpRouterServiceFactory({ configure: ({ app, routes, middleware, logger }) => { logger.info( 'Using custom root HttpRouterServiceFactory configure function', ); app.use(middleware.helmet()); app.use(middleware.cors()); app.use(middleware.compression()); app.use(middleware.logging()); // Add the custom middleware function before all // of the route handlers app.use(addTestHeaderMiddleware({ logger })); app.use(routes); app.use(middleware.notFound()); app.use(middleware.error()); }, }); // Export the BackendFeature as the default entrypoint export default customRootHttpServerFactory;"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/configuring_dynamic_plugins/overriding-core-backend-services_title-plugins-rhdh-configure |
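Because a core service override only loads when the backend starts with the matching variable from Table 10.1 set to true, a typical deployment step is to add that variable to the RHDH backend container. The following sketch assumes an OpenShift deployment named developer-hub in a namespace called rhdh; both names are hypothetical, and if the deployment is managed by the Red Hat Developer Hub Operator or a Helm chart, set the variable through that mechanism instead so it is not reverted on reconciliation.
# Allow a dynamic plugin to replace the root HTTP router service.
oc -n rhdh set env deployment/developer-hub ENABLE_CORE_ROOTHTTPROUTER_OVERRIDE=true
# Confirm that the variable is now present on the backend container.
oc -n rhdh set env deployment/developer-hub --list | grep ENABLE_CORE_ROOTHTTPROUTER_OVERRIDE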
Appendix A. Using your subscription | Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 6 - Registering the system and managing subscriptions Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_javascript_client/using_your_subscription
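For the registration step in Section A.4, the Registration Assistant generates the exact command for your system; on a typical Red Hat Enterprise Linux host it is a subscription-manager invocation along the following lines. This is a generic sketch rather than the command the assistant produces, and the user name is a placeholder.
# Register the system and attach an available subscription.
subscription-manager register --username <your_red_hat_login> --auto-attach
# Verify that the system is registered and the subscription is active.
subscription-manager status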
B.45.3. RHBA-2011:0446 - libvirt bug fix update | B.45.3. RHBA-2011:0446 - libvirt bug fix update Updated libvirt packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems. In addition, libvirt provides tools for remotely managing virtualized systems. Bug Fixes BZ# 656355 When a root-squashing export of a domain was owned by a group to which the qemu user belonged, but was not owned by the qemu user, libvirt could not create a file to save the domain's state. This was because the save operation was invoked by the user who did not have the needed group permissions. With this update, libvirt first acquires all the needed group permissions and only then attempts to perform the aforementioned save operation. BZ# 656972 Members of the qemu group did not have read/write permissions for the "[localstatedir]/[cache/lib]/libvirt/qemu/" directory in which XML files that define sockets are placed. The permissions are now updated to grant the qemu group read/write access. BZ# 658141 When migrating a guest, a race condition could occur in which an application queried block information on a virtual guest that had just been migrated away. As a result, the libvirt service crashed. The libvirt application now verifies that a guest exists before attempting to start any monitoring operations. BZ# 658143 Live migration of a guest could take an exceptionally long time to converge to the switchover point if the guest was very busy. Allowing the downtime setting of a guest to be increased makes migration more likely to complete. However, libvirt was sending an incorrectly formatted request to increase the downtime setting of a guest. With this update, libvirt correctly sends the downtime setting request. BZ# 658144 The "addrToString" methods did not work properly with UNIX domain sockets which did not have a normal "host:port" address. As a result, SASL (Simple Authentication and Security Layer) could not be used over UNIX domain sockets. With this update, the "addrToString" methods are fixed and SASL is no longer restricted to TCP connections. BZ# 662042 Prior to this update, libvirt was not able to recognize whether a domain crashed or was properly shut down. With this update, a SHUTDOWN event sent by qemu is recognized by libvirt when a domain is properly shut down. If the SHUTDOWN event is not received, the domain is declared to have crashed. BZ# 662043 A deadlock occurred in the libvirt service when running concurrent bidirectional migration because certain calls did not release their local driver lock before issuing an RPC (Remote Procedure Call) call on a remote libvirt daemon. A deadlock no longer occurs between two communicating libvirt daemons. BZ# 662045 A specification file bug caused permissions on the /var/lib/libvirt directory to change when upgrading a system. With this update, correct permissions are assigned to the aforementioned directory. BZ# 662046 An off-by-one error in a clock variable caused a virtual guest to show incorrect date and time information. This update addresses this error. Date and time information is now correctly displayed. BZ# 668694 The %post script (part of the libvirt-client package) started the libvirt-guests service even when the service was explicitly turned off. With this update, the libvirt-guests service is no longer started when explicitly turned off.
BZ# 672549 Starting and shutting down a domain led to a memory leak due to the memory buffer not being freed properly. With this update, starting and shutting down a domain no longer leads to a memory leak. BZ# 672554 Starting and shutting down a domain led to a memory leak due to the use of a thread-unfriendly "matchpathcon" (which gets the default security context for the specified path) SELinux API. With this update, libvirt uses improved SELinux APIs and a memory leak no longer occurs. All users of libvirt are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhba-2011-0446 |
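A minimal, hedged sketch of applying this advisory on a Red Hat Enterprise Linux 6 host follows; it assumes the system is subscribed to the appropriate RHEL 6 channels and that a maintenance window allows the libvirtd service to be restarted. The package names shown are the ones discussed in the advisory text above.
# Update the libvirt packages delivered by this advisory.
yum update libvirt libvirt-client
# Restart the daemon and confirm the installed version.
service libvirtd restart
rpm -q libvirt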
7.5. Configuring System Memory Capacity | 7.5. Configuring System Memory Capacity This section discusses memory-related kernel parameters that may be useful in improving memory utilization on your system. These parameters can be temporarily set for testing purposes by altering the value of the corresponding file in the /proc file system. Once you have determined the values that produce optimal performance for your use case, you can set them permanently by using the sysctl command. Memory usage is typically configured by setting the value of one or more kernel parameters. These parameters can be set temporarily by altering the contents of files in the /proc file system, or they can be set persistently with the sysctl tool, which is provided by the procps-ng package. For example, to set the overcommit_memory parameter to 1 temporarily, run the following command: To set this value persistently, add vm.overcommit_memory=1 to /etc/sysctl.conf and then run the following command: Setting a parameter temporarily is useful for determining the effect the parameter has on your system. You can then set the parameter persistently when you are sure that the parameter's value has the desired effect. Note To expand your expertise, you might also be interested in the Red Hat Enterprise Linux Performance Tuning (RH442) training course. 7.5.1. Virtual Memory Parameters The parameters listed in this section are located in /proc/sys/vm unless otherwise indicated. dirty_ratio A percentage value. When this percentage of total system memory is modified, the system begins writing the modifications to disk with the pdflush operation. The default value is 20 percent. dirty_background_ratio A percentage value. When this percentage of total system memory is modified, the system begins writing the modifications to disk in the background. The default value is 10 percent. overcommit_memory Defines the conditions that determine whether a large memory request is accepted or denied. The default value is 0 . By default, the kernel performs heuristic memory overcommit handling by estimating the amount of memory available and failing requests that are too large. However, since memory is allocated using a heuristic rather than a precise algorithm, overloading memory is possible with this setting. When this parameter is set to 1 , the kernel performs no memory overcommit handling. This increases the possibility of memory overload, but improves performance for memory-intensive tasks. When this parameter is set to 2 , the kernel denies requests for memory equal to or larger than the sum of total available swap space and the percentage of physical RAM specified in overcommit_ratio . This reduces the risk of overcommitting memory, but is recommended only for systems with swap areas larger than their physical memory. overcommit_ratio Specifies the percentage of physical RAM considered when overcommit_memory is set to 2 . The default value is 50 . max_map_count Defines the maximum number of memory map areas that a process can use. The default value ( 65530 ) is appropriate for most cases. Increase this value if your application needs to map more than this number of files. min_free_kbytes Specifies the minimum number of kilobytes to keep free across the system. This is used to determine an appropriate value for each low memory zone, each of which is assigned a number of reserved free pages in proportion to their size. Warning Extreme values can damage your system.
Setting min_free_kbytes to an extremely low value prevents the system from reclaiming memory, which can result in system hangs and OOM-killing processes. However, setting min_free_kbytes too high (for example, to 5-10% of total system memory) causes the system to enter an out-of-memory state immediately, resulting in the system spending too much time reclaiming memory. oom_adj In the event that the system runs out of memory and the panic_on_oom parameter is set to 0 , the oom_killer function kills processes until the system can recover, starting from the process with the highest oom_score . The oom_adj parameter helps determine the oom_score of a process. This parameter is set per process identifier. A value of -17 disables the oom_killer for that process. Other valid values are from -16 to 15 . Note Processes spawned by an adjusted process inherit the oom_score of the process. swappiness The swappiness value, ranging from 0 to 100 , controls the degree to which the system favors anonymous memory or the page cache. A high value improves file-system performance while aggressively swapping less active processes out of RAM. A low value avoids swapping processes out of memory, which usually decreases latency at the cost of I/O performance. The default value is 60 . Warning Setting swappiness to 0 very aggressively avoids swapping out, which increases the risk of OOM killing under strong memory and I/O pressure. 7.5.2. File System Parameters Parameters listed in this section are located in /proc/sys/fs unless otherwise indicated. aio-max-nr Defines the maximum allowed number of events in all active asynchronous input/output contexts. The default value is 65536 . Modifying this value does not pre-allocate or resize any kernel data structures. file-max Determines the maximum number of file handles for the entire system. The default value on Red Hat Enterprise Linux 7 is the maximum of either 8192 , or one tenth of the free memory pages available at the time the kernel starts. Raising this value can resolve errors caused by a lack of available file handles. 7.5.3. Kernel Parameters Default values for the following parameters, located in the /proc/sys/kernel/ directory, can be calculated by the kernel at boot time depending on available system resources. msgmax Defines the maximum allowable size in bytes of any single message in a message queue. This value must not exceed the size of the queue ( msgmnb ). To determine the current msgmax value on your system, use: msgmnb Defines the maximum size in bytes of a single message queue. To determine the current msgmnb value on your system, use: msgmni Defines the maximum number of message queue identifiers, and therefore the maximum number of queues. To determine the current msgmni value on your system, use: shmall Defines the total amount of shared memory pages that can be used on the system at one time. A page is 4096 bytes on the AMD64 and Intel 64 architecture, for example. To determine the current shmall value on your system, use: shmmax Defines the maximum size (in bytes) of a single shared memory segment allowed by the kernel. To determine the current shmmax value on your system, use: shmmni Defines the system-wide maximum number of shared memory segments. The default value is 4096 on all systems. threads-max Defines the system-wide maximum number of threads available to the kernel at one time. To determine the current threads-max value on your system, use: The default value is the result of: The minimum value is 20 . A combined example that persists several of these parameters follows the command listing below. | [
"echo 1 > /proc/sys/vm/overcommit_memory",
"sysctl -p",
"sysctl kernel.msgmax",
"sysctl kernel.msgmnb",
"sysctl kernel.msgmni",
"sysctl kernel.shmall",
"sysctl kernel.shmmax",
"sysctl kernel.threads-max",
"mempages / (8 * THREAD_SIZE / PAGE SIZE )"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Configuration_tools-Configuring_system_memory_capacity |
Providing feedback on Red Hat build of Quarkus documentation | Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/openid_connect_oidc_authentication/proc_providing-feedback-on-red-hat-documentation_security-oidc-authentication |
Configuring the Bare Metal Provisioning service | Configuring the Bare Metal Provisioning service Red Hat OpenStack Platform 17.1 Installing and configuring the Bare Metal Provisioning service (ironic) for Bare Metal as a Service (BMaaS) OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_the_bare_metal_provisioning_service/index |
Chapter 20. Installing an IdM replica | Chapter 20. Installing an IdM replica The following sections describe how to install an Identity Management (IdM) replica interactively, by using the command line (CLI). The replica installation process copies the configuration of the existing server and installs the replica based on that configuration. Note See Installing an Identity Management server using an Ansible playbook . Use Ansible roles to consistently install and customize multiple replicas. Interactive and non-interactive methods that do not use Ansible are useful in topologies where, for example, the replica preparation is delegated to a user or a third party. You can also use these methods in geographically distributed topologies where you do not have access from the Ansible controller node. Prerequisites You are installing one IdM replica at a time. The installation of multiple replicas at the same time is not supported. Ensure your system is prepared for IdM replica installation . Important If this preparation is not performed, installing an IdM replica will fail. 20.1. Installing an IdM replica with integrated DNS and a CA Follow this procedure to install an Identity Management (IdM) replica: With integrated DNS With a certificate authority (CA) You can do this to, for example, replicate the CA service for resiliency after installing an IdM server with an integrated CA. Important When configuring a replica with a CA, the CA configuration of the replica must mirror the CA configuration of the other server. For example, if the server includes an integrated IdM CA as the root CA, the new replica must also be installed with an integrated CA as the root CA. No other CA configuration is available in this case. Including the --setup-ca option in the ipa-replica-install command copies the CA configuration of the initial server. Prerequisites Ensure your system is prepared for an IdM replica installation . Procedure Enter ipa-replica-install with these options: --setup-dns to configure the replica as a DNS server --forwarder to specify a per-server forwarder, or --no-forwarder if you do not want to use any per-server forwarders. To specify multiple per-server forwarders for failover reasons, use --forwarder multiple times. Note The ipa-replica-install utility accepts a number of other options related to DNS settings, such as --no-reverse or --no-host-dns . For more information about them, see the ipa-replica-install (1) man page. --setup-ca to include a CA on the replica For example, to set up a replica with an integrated DNS server and a CA that forwards all DNS requests not managed by the IdM servers to the DNS server running on IP 192.0.2.1: After the installation completes, add a DNS delegation from the parent domain to the IdM DNS domain. For example, if the IdM DNS domain is idm.example.com , add a name server (NS) record to the example.com parent domain. Important Repeat this step each time after you install an IdM DNS server. 20.2. Installing an IdM replica with integrated DNS and no CA Follow this procedure to install an Identity Management (IdM) replica: With integrated DNS Without a certificate authority (CA) in an IdM environment in which a CA is already installed. The replica will forward all certificate operations to the IdM server with a CA installed. Prerequisites Ensure your system is prepared for an IdM replica installation . 
Procedure Enter ipa-replica-install with these options: --setup-dns to configure the replica as a DNS server --forwarder to specify a per-server forwarder, or --no-forwarder if you do not want to use any per-server forwarders. To specify multiple per-server forwarders for failover reasons, use --forwarder multiple times. For example, to set up a replica with an integrated DNS server that forwards all DNS requests not managed by the IdM servers to the DNS server running on IP 192.0.2.1: Note The ipa-replica-install utility accepts a number of other options related to DNS settings, such as --no-reverse or --no-host-dns . For more information about them, see the ipa-replica-install (1) man page. After the installation completes, add a DNS delegation from the parent domain to the IdM DNS domain. For example, if the IdM DNS domain is idm.example.com , add a name server (NS) record to the example.com parent domain. Important Repeat this step each time after you install an IdM DNS server. 20.3. Installing an IdM replica without integrated DNS and with a CA Follow this procedure to install an Identity Management (IdM) replica: Without integrated DNS With a certificate authority (CA) Important When configuring a replica with a CA, the CA configuration of the replica must mirror the CA configuration of the other server. For example, if the server includes an integrated IdM CA as the root CA, the new replica must also be installed with an integrated CA as the root CA. No other CA configuration is available in this case. Including the --setup-ca option in the ipa-replica-install command copies the CA configuration of the initial server. Prerequisites Ensure your system is prepared for an IdM replica installation . Procedure Enter ipa-replica-install with the --setup-ca option. Add the newly created IdM DNS service records to your DNS server: Export the IdM DNS service records into a file in the nsupdate format: Submit a DNS update request to your DNS server using the nsupdate utility and the dns_records_file.nsupdate file. For more information, see Updating External DNS Records Using nsupdate in RHEL 7 documentation. Alternatively, refer to your DNS server documentation for adding DNS records. 20.4. Installing an IdM replica without integrated DNS and without a CA Follow this procedure to install an Identity Management (IdM) replica: Without integrated DNS Without a certificate authority (CA) by providing the required certificates manually. The assumption here is that the first server was installed without a CA. Important You cannot install a server or replica using self-signed third-party server certificates because the imported certificate files must contain the full CA certificate chain of the CA that issued the LDAP and Apache server certificates. Prerequisites Ensure your system is prepared for an IdM replica installation . Procedure Enter ipa-replica-install , and provide the required certificate files by adding these options: --dirsrv-cert-file --dirsrv-pin --http-cert-file --http-pin For details about the files that are provided using these options, see Section 4.1, "Certificates required to install an IdM server without a CA" . For example: Note Do not add the --ca-cert-file option. The ipa-replica-install utility takes this part of the certificate information automatically from the first server you installed. 20.5. Installing an IdM hidden replica A hidden (unadvertised) replica is an Identity Management (IdM) server that has all services running and available. 
However, it has no SRV records in DNS, and LDAP server roles are not enabled. Therefore, clients cannot use service discovery to detect these hidden replicas. For further details about hidden replicas, see The hidden replica mode . Prerequisites Ensure your system is prepared for an IdM replica installation . Procedure To install a hidden replica, use the following command: Note that the command installs a replica without DNS SRV records and with disabled LDAP server roles. You can also change the mode of existing replica to hidden. For details, see Demotion and promotion of hidden replicas 20.6. Testing an IdM replica After creating a replica, check if the replica replicates data as expected. You can use the following procedure. Procedure Create a user on the new replica: Make sure the user is visible on another replica: 20.7. Connections performed during an IdM replica installation Requests performed during an IdM replica installation lists the operations performed by ipa-replica-install , the Identity Management (IdM) replica installation tool. Table 20.1. Requests performed during an IdM replica installation Operation Protocol used Purpose DNS resolution against the DNS resolvers configured on the client system DNS To discover the IP addresses of IdM servers Requests to ports 88 (TCP/TCP6 and UDP/UDP6) on the discovered IdM servers Kerberos To obtain a Kerberos ticket JSON-RPC calls to the IdM Apache-based web-service on the discovered or configured IdM servers HTTPS IdM client enrollment; replica keys retrieval and certificate issuance if required Requests over TCP/TCP6 to port 389 on the IdM server, using SASL GSSAPI authentication, plain LDAP, or both LDAP IdM client enrollment; CA certificate chain retrieval; LDAP data replication Requests over TCP/TCP6 to port 22 on IdM server SSH To check if the connection is working (optionally) Access over port 8443 (TCP/TCP6) on the IdM servers HTTPS To administer the Certificate Authority on the IdM server (only during IdM server and replica installation) | [
"ipa-replica-install --setup-dns --forwarder 192.0.2.1 --setup-ca",
"ipa-replica-install --setup-dns --forwarder 192.0.2.1",
"ipa-replica-install --setup-ca",
"ipa dns-update-system-records --dry-run --out dns_records_file.nsupdate",
"ipa-replica-install --dirsrv-cert-file /tmp/server.crt --dirsrv-cert-file /tmp/server.key --dirsrv-pin secret --http-cert-file /tmp/server.crt --http-cert-file /tmp/server.key --http-pin secret",
"ipa-replica-install --hidden-replica",
"[admin@new_replica ~]USD ipa user-add test_user",
"[admin@another_replica ~]USD ipa user-show test_user"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/installing-an-ipa-replica_installing-identity-management |
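The DNS delegation and nsupdate steps described in the chapter above can be sketched as follows. Every hostname, zone name, key file, and server address here is a placeholder for illustration, not a value taken from an actual deployment. Adding the NS delegation record to the example.com parent zone, assuming the parent DNS server accepts TSIG-authenticated dynamic updates:

$ nsupdate -k /etc/parent-zone.key
> server dns.example.com
> update add idm.example.com. 86400 IN NS replica.idm.example.com.
> send

Submitting the records exported with ipa dns-update-system-records --dry-run follows the same pattern:

$ nsupdate -k /etc/parent-zone.key dns_records_file.nsupdate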
3.3. Installing guest agents and drivers | 3.3. Installing guest agents and drivers 3.3.1. Red Hat Virtualization Guest agents, tools, and drivers The Red Hat Virtualization guest agents, tools, and drivers provide additional functionality for virtual machines, such as gracefully shutting down or rebooting virtual machines from the VM Portal and Administration Portal. The tools and agents also provide information for virtual machines, including: Resource usage IP addresses The guest agents, tools and drivers are distributed as an ISO file that you can attach to virtual machines. This ISO file is packaged as an RPM file that you can install and upgrade from the Manager machine. You need to install the guest agents and drivers on a virtual machine to enable this functionality for that machine. Table 3.1. Red Hat Virtualization Guest drivers Driver Description Works on virtio-net Paravirtualized network driver provides enhanced performance over emulated devices like rtl. Server and Desktop. virtio-block Paravirtualized HDD driver offers increased I/O performance over emulated devices like IDE by optimizing the coordination and communication between the virtual machine and the hypervisor. The driver complements the software implementation of the virtio-device used by the host to play the role of a hardware device. Server and Desktop. virtio-scsi Paravirtualized iSCSI HDD driver offers similar functionality to the virtio-block device, with some additional enhancements. In particular, this driver supports adding hundreds of devices, and names devices using the standard SCSI device naming scheme. Server and Desktop. virtio-serial Virtio-serial provides support for multiple serial ports. The improved performance is used for fast communication between the virtual machine and the host that avoids network complications. This fast communication is required for the guest agents and for other features such as clipboard copy-paste between the virtual machine and the host and logging. Server and Desktop. virtio-balloon Virtio-balloon is used to control the amount of memory a virtual machine actually accesses. It offers improved memory overcommitment. Server and Desktop. qxl A paravirtualized display driver reduces CPU usage on the host and provides better performance through reduced network bandwidth on most workloads. Server and Desktop. Table 3.2. Red Hat Virtualization Guest agents and tools Guest agent/tool Description Works on qemu-guest-agent Used instead of ovirt-guest-agent-common on Red Hat Enterprise Linux 8 virtual machines. It is installed and enabled by default. Server and Desktop. spice-agent The SPICE agent supports multiple monitors and is responsible for client-mouse-mode support to provide a better user experience and improved responsiveness than the QEMU emulation. Cursor capture is not needed in client-mouse-mode. The SPICE agent reduces bandwidth usage when used over a wide area network by reducing the display level, including color depth, disabling wallpaper, font smoothing, and animation. The SPICE agent enables clipboard support allowing cut and paste operations for both text and images between client and virtual machine, and automatic guest display setting according to client-side settings. On Windows-based virtual machines, the SPICE agent consists of vdservice and vdagent. Server and Desktop. 3.3.2. 
Installing the guest agents, tools, and drivers on Windows Procedure To install the guest agents, tools, and drivers on a Windows virtual machine, complete the following steps: On the Manager machine, install the virtio-win package: # dnf install virtio-win* After you install the package, the ISO file is located in /usr/share/virtio-win/virtio-win _version .iso on the Manager machine. Upload virtio-win _version .iso to a data storage domain. See Uploading Images to a Data Storage Domain in the Administration Guide for details. In the Administration or VM Portal, if the virtual machine is running, use the Change CD button to attach the virtio-win _version .iso file to each of your virtual machines. If the virtual machine is powered off, click the Run Once button and attach the ISO as a CD. Log in to the virtual machine. Select the CD Drive containing the virtio-win _version .iso file. You can complete the installation with either the GUI or the command line. Run the installer. To install with the GUI, complete the following steps Double-click virtio-win-guest-tools.exe . Click at the welcome screen. Follow the prompts in the installation wizard. When installation is complete, select Yes, I want to restart my computer now and click Finish to apply the changes. To install silently with the command line, complete the following steps Open a command prompt with Administrator privileges. Enter the msiexec command: D:\ msiexec /i " PATH_TO_MSI " /qn [/l*v " PATH_TO_LOG "][/norestart] ADDLOCAL=ALL Other possible values for ADDLOCAL are listed below. For example, to run the installation when virtio-win-gt-x64.msi is on the D:\ drive, without saving the log, and then immediately restart the virtual machine, enter the following command: D:\ msiexec /i "virtio-win-gt-x64.msi" /qn ADDLOCAL=ALL After installation completes, the guest agents and drivers pass usage information to the Red Hat Virtualization Manager and enable you to access USB devices and other functionality. 3.3.3. Values for ADDLOCAL to customize virtio-win command-line installation When installing virtio-win-gt-x64.msi or virtio-win-gt-x32.msi with the command line, you can install any one driver, or any combination of drivers. You can also install specific agents, but you must also install each agent's corresponding drivers. The ADDLOCAL parameter of the msiexec command enables you to specify which drivers or agents to install. ADDLOCAL=ALL installs all drivers and agents. Other values are listed in the following tables. Table 3.3. Possible values for ADDLOCAL to install drivers Value for ADDLOCAL Driver Name Description FE_network_driver virtio-net Paravirtualized network driver provides enhanced performance over emulated devices like rtl. FE_balloon_driver virtio-balloon Controls the amount of memory a virtual machine actually accesses. It offers improved memory overcommitment. FE_pvpanic_driver pvpanic QEMU pvpanic device driver. FE_qemufwcfg_driver qemufwcfg QEMU FWCfg device driver. FE_qemupciserial_driver qemupciserial QEMU PCI serial device driver. FE_spice_driver Spice Driver A paravirtualized display driver reduces CPU usage on the host and provides better performance through reduced network bandwidth on most workloads. FE_vioinput_driver vioinput VirtIO Input Driver. FE_viorng_driver viorng VirtIO RNG device driver. FE_vioscsi_driver vioscsi VirtIO SCSI pass-through controller. FE_vioserial_driver vioserial VirtIO Serial device driver. FE_viostor_driver viostor VirtIO Block driver. Table 3.4. 
Possible values for ADDLOCAL to install agents and required corresponding drivers Agent Description Corresponding driver(s) Value for ADDLOCAL Spice Agent Supports multiple monitors, responsible for client-mouse-mode support, reduces bandwidth usage, enables clipboard support between client and virtual machine, provide a better user experience and improved responsiveness. vioserial and Spice driver FE_spice_Agent,FE_vioserial_driver,FE_spice_driver Examples The following command installs only the VirtIO SCSI pass-through controller, the VirtIO Serial device driver, and the VirtIO Block driver: D:\ msiexec /i "virtio-win-gt-x64.msi" /qn ADDLOCAL=`FE_vioscsi_driver,FE_vioserial_driver,FE_viostor_driver The following command installs only the Spice Agent and its required corresponding drivers: D:\ msiexec /i "virtio-win-gt-x64.msi" /qn ADDLOCAL = FE_spice_Agent,FE_vioserial_driver,FE_spice_driver Additional resources Updating Win Guest Drivers with Windows Updates Updating the Guest Agents and Drivers on Windows The Microsoft Developer website: Windows Installer Command-Line Options for the Windows installer Property Reference for the Windows installer | [
"dnf install virtio-win*",
"D:\\ msiexec /i \" PATH_TO_MSI \" /qn [/l*v \" PATH_TO_LOG \"][/norestart] ADDLOCAL=ALL",
"D:\\ msiexec /i \"virtio-win-gt-x64.msi\" /qn ADDLOCAL=ALL",
"D:\\ msiexec /i \"virtio-win-gt-x64.msi\" /qn ADDLOCAL=`FE_vioscsi_driver,FE_vioserial_driver,FE_viostor_driver",
"D:\\ msiexec /i \"virtio-win-gt-x64.msi\" /qn ADDLOCAL = FE_spice_Agent,FE_vioserial_driver,FE_spice_driver"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/installing_guest_agents_and_drivers_windows |
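As a further illustration of the silent installation described above, the following hypothetical invocation combines the documented /l*v logging option with a subset of the driver values from Table 3.3. The log path is an arbitrary example, not a required location:

D:\ msiexec /i "virtio-win-gt-x64.msi" /qn /l*v "C:\Temp\virtio-install.log" ADDLOCAL=FE_network_driver,FE_balloon_driver,FE_viostor_driver

Because /norestart is omitted, the installer may restart the guest automatically when it finishes, as in the interactive installation.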
Console APIs | Console APIs OpenShift Container Platform 4.14 Reference guide for console APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/console_apis/index |
4.2. Deployment | 4.2. Deployment In Red Hat Enterprise Linux 6.1, multilib Python packages and packages dependent on them have been removed. This was done because installing Python packages for multiple architectures on one system can cause various problems. For more information, refer to https://access.redhat.com/site/solutions/68140. Some HP Proliant servers may report incorrect CPU frequency values in /proc/cpuinfo or /sys/devices/system/cpu/*/cpufreq. This is due to the firmware manipulating the CPU frequency without providing any notification to the operating system. To avoid this, ensure that the "HP Power Regulator" option in the BIOS is set to "OS Control". An alternative available on more recent systems is to set "Collaborative Power Control" to "Enabled". Some packages in the Optional repositories on RHN have multilib file conflicts. Consequently, these packages cannot have both the primary architecture (e.g. x86_64) and secondary architecture (e.g. i686) copies of the package installed on the same machine simultaneously. To work around this, install only one copy of the conflicting package. When rebuilding the grub package on the x86_64 architecture, the glibc-static.i686 package must be used. Using the glibc-static.x86_64 package will not meet the build requirements. Parted in Red Hat Enterprise Linux 6 cannot handle Extended Address Volumes (EAV) Direct Access Storage Devices (DASD) that have more than 65535 cylinders. Consequently, EAV DASD drives cannot be partitioned using parted, and installation on EAV DASD drives will fail. To work around this issue, complete the installation on a non-EAV DASD drive, then add the EAV device after installation using the tools provided in s390-utils. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/ar01s04s02
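To see whether a server is affected by the CPU frequency reporting issue described above, you can compare the values the kernel reports before and after changing the BIOS setting; this is only an illustrative check, and reading the cpufreq file may require root privileges:

$ grep "cpu MHz" /proc/cpuinfo
$ cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq

If the reported frequency stays far below the processor's nominal speed while the system is busy, review the "HP Power Regulator" or "Collaborative Power Control" settings as described above.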
Security overview | Security overview Red Hat build of Quarkus 3.15 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/security_overview/index |
Appendix A. Ceph subsystems default logging level values | Appendix A. Ceph subsystems default logging level values A table of the default logging level values for the various Ceph subsystems. Subsystem Log Level Memory Level asok 1 5 auth 1 5 buffer 0 0 client 0 5 context 0 5 crush 1 5 default 0 5 filer 0 5 bluestore 1 5 finisher 1 5 heartbeatmap 1 5 javaclient 1 5 journaler 0 5 journal 1 5 lockdep 0 5 mds balancer 1 5 mds locker 1 5 mds log expire 1 5 mds log 1 5 mds migrator 1 5 mds 1 5 monc 0 5 mon 1 5 ms 0 5 objclass 0 5 objectcacher 0 5 objecter 0 0 optracker 0 5 osd 0 5 paxos 0 5 perfcounter 1 5 rados 0 5 rbd 0 5 rgw 1 5 throttle 1 5 timer 0 5 tp 0 5 | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/troubleshooting_guide/ceph-subsystems-default-logging-level-values_diag |
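Each value pair in the table is a log level followed by an in-memory level, and either can be raised per subsystem when you need more detail. A minimal sketch of increasing the OSD subsystem for debugging, assuming you can run Ceph administrative commands against the cluster (the subsystem and levels are examples only):

# ceph config set osd debug_osd 5/5

The equivalent setting can also be placed in the [osd] section of the Ceph configuration file as debug_osd = 5/5, and the same debug_<subsystem> pattern applies to the other subsystems listed above.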
Chapter 13. Accessing third-party monitoring APIs | Chapter 13. Accessing third-party monitoring APIs In OpenShift Container Platform 4.11, you can access web service APIs for some third-party monitoring components from the command line interface (CLI). 13.1. Accessing third-party monitoring web service APIs You can directly access third-party web service APIs from the command line for the following monitoring stack components: Prometheus, Alertmanager, Thanos Ruler, and Thanos Querier. The following example commands show how to query the service API receivers for Alertmanager. This example requires that the associated user account be bound against the monitoring-alertmanager-edit role in the openshift-monitoring namespace and that the account has the privilege to view the route. This access only supports using a Bearer Token for authentication. USD oc login -u <username> -p <password> USD host=USD(oc -n openshift-monitoring get route alertmanager-main -ojsonpath={.spec.host}) USD token=USD(oc whoami -t) USD curl -H "Authorization: Bearer USDtoken" -k "https://USDhost/api/v2/receivers" Note To access Thanos Ruler and Thanos Querier service APIs, the requesting account must have get permission on the namespaces resource, which can be done by granting the cluster-monitoring-view cluster role to the account. 13.2. Querying metrics by using the federation endpoint for Prometheus You can use the federation endpoint to scrape platform and user-defined metrics from a network location outside the cluster. To do so, access the Prometheus /federate endpoint for the cluster via an OpenShift Container Platform route. Warning A delay in retrieving metrics data occurs when you use federation. This delay can affect the accuracy and timeliness of the scraped metrics. Using the federation endpoint can also degrade the performance and scalability of your cluster, especially if you use the federation endpoint to retrieve large amounts of metrics data. To avoid these issues, follow these recommendations: Do not try to retrieve all metrics data via the federation endpoint. Query it only when you want to retrieve a limited, aggregated data set. For example, retrieving fewer than 1,000 samples for each request helps minimize the risk of performance degradation. Avoid querying the federation endpoint frequently. Limit queries to a maximum of one every 30 seconds. If you need to forward large amounts of data outside the cluster, use remote write instead. For more information, see the Configuring remote write storage section. Prerequisites You have installed the OpenShift CLI ( oc ). You have obtained the host URL for the OpenShift Container Platform route. You have access to the cluster as a user with the cluster-monitoring-view cluster role or have obtained a bearer token with get permission on the namespaces resource. Note You can only use bearer token authentication to access the federation endpoint. Procedure Retrieve the bearer token: USD token=`oc whoami -t` Query metrics from the /federate route. The following example queries up metrics: USD curl -G -s -k -H "Authorization: Bearer USDtoken" \ 'https:/<federation_host>/federate' \ 1 --data-urlencode 'match[]=up' 1 For <federation_host>, substitute the host URL for the federation route. 
Example output # TYPE up untyped up{apiserver="kube-apiserver",endpoint="https",instance="10.0.143.148:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035322214 up{apiserver="kube-apiserver",endpoint="https",instance="10.0.148.166:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035338597 up{apiserver="kube-apiserver",endpoint="https",instance="10.0.173.16:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035343834 ... 13.3. Additional resources Configuring remote write storage Managing metrics Managing alerts | [
"oc login -u <username> -p <password>",
"host=USD(oc -n openshift-monitoring get route alertmanager-main -ojsonpath={.spec.host})",
"token=USD(oc whoami -t)",
"curl -H \"Authorization: Bearer USDtoken\" -k \"https://USDhost/api/v2/receivers\"",
"token=`oc whoami -t`",
"curl -G -s -k -H \"Authorization: Bearer USDtoken\" 'https:/<federation_host>/federate' \\ 1 --data-urlencode 'match[]=up'",
"TYPE up untyped up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.143.148:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035322214 up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.148.166:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035338597 up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.173.16:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035343834"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/monitoring/accessing-third-party-monitoring-apis |
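The note in section 13.1 states that the Thanos Ruler and Thanos Querier service APIs can be reached the same way once the requesting account can get namespaces, for example by granting it the cluster-monitoring-view cluster role. A sketch of that pattern for Thanos Querier, assuming the default thanos-querier route in the openshift-monitoring namespace (verify the route name in your cluster before relying on it):

$ oc adm policy add-cluster-role-to-user cluster-monitoring-view <username>
$ host=$(oc -n openshift-monitoring get route thanos-querier -ojsonpath={.spec.host})
$ token=$(oc whoami -t)
$ curl -H "Authorization: Bearer $token" -k "https://$host/api/v1/query?query=up"

This mirrors the Alertmanager example above and returns the result of the up query in the Prometheus HTTP API response format.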
A.12. VDSM Hook Return Codes | A.12. VDSM Hook Return Codes Hook scripts must return one of the return codes shown in Table A.3, "Hook Return Codes" . The return code will determine whether further hook scripts are processed by VDSM. Table A.3. Hook Return Codes Code Description 0 The hook script ended successfully 1 The hook script failed, other hooks should be processed 2 The hook script failed, no further hooks should be processed >2 Reserved | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/VDSM_hooks_return_codes |
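A hook that uses these return codes can be as small as a shell script placed in one of the VDSM hook directories on the host. The following is only an illustrative sketch: the hook point, file name, and the EXAMPLE_DISABLE variable are hypothetical, and real hooks typically also modify the domain XML that VDSM passes to them.

#!/bin/bash
# Hypothetical hook script, for example /usr/libexec/vdsm/hooks/before_vm_start/50_example
if [ -n "$EXAMPLE_DISABLE" ]; then
    # Report failure and stop further hook processing (return code 2).
    exit 2
fi
# Success: VDSM continues with any remaining hook scripts (return code 0).
exit 0

Make the script executable so that VDSM picks it up along with the other scripts in that hook directory.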
Chapter 5. Component Versions | Chapter 5. Component Versions 5.1. Component Versions The full list of component versions used in Red Hat JBoss Data Grid is available at the Customer Portal at https://access.redhat.com/site/articles/488833. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.0_release_notes/chap-component_versions
Chapter 3. Upgrading from Red Hat Hyperconverged Infrastructure for Virtualization 1.5 and 1.6 | Chapter 3. Upgrading from Red Hat Hyperconverged Infrastructure for Virtualization 1.5 and 1.6 3.1. Upgrade workflow To upgrade to Red Hat Hyperconverged Infrastructure for Virtualization 1.8, the primary requirement is to upgrade to Red Hat Hyperconverged Infrastructure for Virtualization 1.7 with the latest version of Red Hat Virtualization 4.3. The upgrade process for RHHI for Virtualization versions 1.5, 1.6, and 1.7 to RHHI for Virtualization 1.8 is as follows: RHHI for Virtualization 1.5 (based on RHV 4.2) Perform the upgrade from 1.5 to 1.7 and then upgrade to RHHI for Virtualization 1.8. RHHI for Virtualization 1.6 (based on RHV 4.3) Perform the upgrade from 1.6 to 1.7 and then upgrade to RHHI for Virtualization 1.8. RHHI for Virtualization 1.7 (based on RHV 4.3.8 or later) Update the current set-up to latest Red Hat Virtualization 4.3, then upgrade to RHHI for Virtualization 1.8. 3.2. Upgrading to Red Hat Hyperconverged Infrastructure for Virtualization 1.7 Follow Upgrading to RHHI for Virtualization 1.7 guide to upgrade from RHHI for Virtualization 1.5, 1.6 to 1.7 and to upgrade RHHI for Virtualization 1.7 to the latest Red Hat Virtualization 4.3.z version. | null | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/upgrading_red_hat_hyperconverged_infrastructure_for_virtualization/upgrading-to-rhhi-v17-180 |
Chapter 3. ClusterServiceVersion [operators.coreos.com/v1alpha1] | Chapter 3. ClusterServiceVersion [operators.coreos.com/v1alpha1] Description ClusterServiceVersion is a Custom Resource of type ClusterServiceVersionSpec . Type object Required metadata spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ClusterServiceVersionSpec declarations tell OLM how to install an operator that can manage apps for a given version. status object ClusterServiceVersionStatus represents information about the status of a CSV. Status may trail the actual state of a system. 3.1.1. .spec Description ClusterServiceVersionSpec declarations tell OLM how to install an operator that can manage apps for a given version. Type object Required displayName install Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. apiservicedefinitions object APIServiceDefinitions declares all of the extension apis managed or required by an operator being ran by ClusterServiceVersion. cleanup object Cleanup specifies the cleanup behaviour when the CSV gets deleted customresourcedefinitions object CustomResourceDefinitions declares all of the CRDs managed or required by an operator being ran by ClusterServiceVersion. If the CRD is present in the Owned list, it is implicitly required. description string Description of the operator. Can include the features, limitations or use-cases of the operator. displayName string The name of the operator in display format. icon array The icon for this operator. icon[] object install object NamedInstallStrategy represents the block of an ClusterServiceVersion resource where the install strategy is specified. installModes array InstallModes specify supported installation types installModes[] object InstallMode associates an InstallModeType with a flag representing if the CSV supports it keywords array (string) A list of keywords describing the operator. labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. links array A list of links related to the operator. links[] object maintainers array A list of organizational entities maintaining the operator. maintainers[] object maturity string minKubeVersion string nativeAPIs array nativeAPIs[] object GroupVersionKind unambiguously identifies a kind. It doesn't anonymously include GroupVersion to avoid automatic coercion. It doesn't use a GroupVersion to avoid custom marshalling provider object The publishing entity behind the operator. relatedImages array List any related images, or other container images that your Operator might require to perform their functions. 
This list should also include operand images as well. All image references should be specified by digest (SHA) and not by tag. This field is only used during catalog creation and plays no part in cluster runtime. relatedImages[] object replaces string The name of a CSV this one replaces. Should match the metadata.Name field of the old CSV. selector object Label selector for related resources. skips array (string) The name(s) of one or more CSV(s) that should be skipped in the upgrade graph. Should match the metadata.Name field of the CSV that should be skipped. This field is only used during catalog creation and plays no part in cluster runtime. version string webhookdefinitions array webhookdefinitions[] object WebhookDescription provides details to OLM about required webhooks 3.1.2. .spec.apiservicedefinitions Description APIServiceDefinitions declares all of the extension apis managed or required by an operator being ran by ClusterServiceVersion. Type object Property Type Description owned array owned[] object APIServiceDescription provides details to OLM about apis provided via aggregation required array required[] object APIServiceDescription provides details to OLM about apis provided via aggregation 3.1.3. .spec.apiservicedefinitions.owned Description Type array 3.1.4. .spec.apiservicedefinitions.owned[] Description APIServiceDescription provides details to OLM about apis provided via aggregation Type object Required group kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance containerPort integer deploymentName string description string displayName string group string kind string name string resources array resources[] object APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.5. .spec.apiservicedefinitions.owned[].actionDescriptors Description Type array 3.1.6. .spec.apiservicedefinitions.owned[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.7. .spec.apiservicedefinitions.owned[].resources Description Type array 3.1.8. .spec.apiservicedefinitions.owned[].resources[] Description APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. Type object Required kind name version Property Type Description kind string Kind of the referenced resource type. name string Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. version string API Version of the referenced resource type. 3.1.9. .spec.apiservicedefinitions.owned[].specDescriptors Description Type array 3.1.10. 
.spec.apiservicedefinitions.owned[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.11. .spec.apiservicedefinitions.owned[].statusDescriptors Description Type array 3.1.12. .spec.apiservicedefinitions.owned[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.13. .spec.apiservicedefinitions.required Description Type array 3.1.14. .spec.apiservicedefinitions.required[] Description APIServiceDescription provides details to OLM about apis provided via aggregation Type object Required group kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance containerPort integer deploymentName string description string displayName string group string kind string name string resources array resources[] object APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.15. .spec.apiservicedefinitions.required[].actionDescriptors Description Type array 3.1.16. .spec.apiservicedefinitions.required[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.17. .spec.apiservicedefinitions.required[].resources Description Type array 3.1.18. .spec.apiservicedefinitions.required[].resources[] Description APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. Type object Required kind name version Property Type Description kind string Kind of the referenced resource type. name string Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. version string API Version of the referenced resource type. 3.1.19. .spec.apiservicedefinitions.required[].specDescriptors Description Type array 3.1.20. .spec.apiservicedefinitions.required[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. 
It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.21. .spec.apiservicedefinitions.required[].statusDescriptors Description Type array 3.1.22. .spec.apiservicedefinitions.required[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.23. .spec.cleanup Description Cleanup specifies the cleanup behaviour when the CSV gets deleted Type object Required enabled Property Type Description enabled boolean 3.1.24. .spec.customresourcedefinitions Description CustomResourceDefinitions declares all of the CRDs managed or required by an operator being ran by ClusterServiceVersion. If the CRD is present in the Owned list, it is implicitly required. Type object Property Type Description owned array owned[] object CRDDescription provides details to OLM about the CRDs required array required[] object CRDDescription provides details to OLM about the CRDs 3.1.25. .spec.customresourcedefinitions.owned Description Type array 3.1.26. .spec.customresourcedefinitions.owned[] Description CRDDescription provides details to OLM about the CRDs Type object Required kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance description string displayName string kind string name string resources array resources[] object APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.27. .spec.customresourcedefinitions.owned[].actionDescriptors Description Type array 3.1.28. .spec.customresourcedefinitions.owned[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.29. .spec.customresourcedefinitions.owned[].resources Description Type array 3.1.30. .spec.customresourcedefinitions.owned[].resources[] Description APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. Type object Required kind name version Property Type Description kind string Kind of the referenced resource type. name string Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. version string API Version of the referenced resource type. 3.1.31. .spec.customresourcedefinitions.owned[].specDescriptors Description Type array 3.1.32. 
.spec.customresourcedefinitions.owned[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.33. .spec.customresourcedefinitions.owned[].statusDescriptors Description Type array 3.1.34. .spec.customresourcedefinitions.owned[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.35. .spec.customresourcedefinitions.required Description Type array 3.1.36. .spec.customresourcedefinitions.required[] Description CRDDescription provides details to OLM about the CRDs Type object Required kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance description string displayName string kind string name string resources array resources[] object APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.37. .spec.customresourcedefinitions.required[].actionDescriptors Description Type array 3.1.38. .spec.customresourcedefinitions.required[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.39. .spec.customresourcedefinitions.required[].resources Description Type array 3.1.40. .spec.customresourcedefinitions.required[].resources[] Description APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. Type object Required kind name version Property Type Description kind string Kind of the referenced resource type. name string Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. version string API Version of the referenced resource type. 3.1.41. .spec.customresourcedefinitions.required[].specDescriptors Description Type array 3.1.42. .spec.customresourcedefinitions.required[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. 
It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.43. .spec.customresourcedefinitions.required[].statusDescriptors Description Type array 3.1.44. .spec.customresourcedefinitions.required[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.45. .spec.icon Description The icon for this operator. Type array 3.1.46. .spec.icon[] Description Type object Required base64data mediatype Property Type Description base64data string mediatype string 3.1.47. .spec.install Description NamedInstallStrategy represents the block of an ClusterServiceVersion resource where the install strategy is specified. Type object Required strategy Property Type Description spec object StrategyDetailsDeployment represents the parsed details of a Deployment InstallStrategy. strategy string 3.1.48. .spec.install.spec Description StrategyDetailsDeployment represents the parsed details of a Deployment InstallStrategy. Type object Required deployments Property Type Description clusterPermissions array clusterPermissions[] object StrategyDeploymentPermissions describe the rbac rules and service account needed by the install strategy deployments array deployments[] object StrategyDeploymentSpec contains the name, spec and labels for the deployment ALM should create permissions array permissions[] object StrategyDeploymentPermissions describe the rbac rules and service account needed by the install strategy 3.1.49. .spec.install.spec.clusterPermissions Description Type array 3.1.50. .spec.install.spec.clusterPermissions[] Description StrategyDeploymentPermissions describe the rbac rules and service account needed by the install strategy Type object Required rules serviceAccountName Property Type Description rules array rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. serviceAccountName string 3.1.51. .spec.install.spec.clusterPermissions[].rules Description Type array 3.1.52. .spec.install.spec.clusterPermissions[].rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "" represents the core API group and "*" represents all API groups. nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both. 
resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. '*' represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs. 3.1.53. .spec.install.spec.deployments Description Type array 3.1.54. .spec.install.spec.deployments[] Description StrategyDeploymentSpec contains the name, spec and labels for the deployment ALM should create Type object Required name spec Property Type Description label object (string) Set is a map of label:value. It implements Labels. name string spec object DeploymentSpec is the specification of the desired behavior of the Deployment. 3.1.55. .spec.install.spec.deployments[].spec Description DeploymentSpec is the specification of the desired behavior of the Deployment. Type object Required selector template Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) paused boolean Indicates that the deployment is paused. progressDeadlineSeconds integer The maximum time in seconds for a deployment to make progress before it is considered to be failed. The deployment controller will continue to process failed deployments and a condition with a ProgressDeadlineExceeded reason will be surfaced in the deployment status. Note that progress will not be estimated during the time a deployment is paused. Defaults to 600s. replicas integer Number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1. revisionHistoryLimit integer The number of old ReplicaSets to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. Defaults to 10. selector object Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. strategy object The deployment strategy to use to replace existing pods with new ones. template object Template describes the pods that will be created. The only allowed template.spec.restartPolicy value is "Always". 3.1.56. .spec.install.spec.deployments[].spec.selector Description Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.57. .spec.install.spec.deployments[].spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.58. 
.spec.install.spec.deployments[].spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.59. .spec.install.spec.deployments[].spec.strategy Description The deployment strategy to use to replace existing pods with new ones. Type object Property Type Description rollingUpdate object Rolling update config params. Present only if DeploymentStrategyType = RollingUpdate. --- TODO: Update this to follow our convention for oneOf, whatever we decide it to be. type string Type of deployment. Can be "Recreate" or "RollingUpdate". Default is RollingUpdate. 3.1.60. .spec.install.spec.deployments[].spec.strategy.rollingUpdate Description Rolling update config params. Present only if DeploymentStrategyType = RollingUpdate. --- TODO: Update this to follow our convention for oneOf, whatever we decide it to be. Type object Property Type Description maxSurge integer-or-string The maximum number of pods that can be scheduled above the desired number of pods. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up. Defaults to 25%. Example: when this is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new pods do not exceed 130% of desired pods. Once old pods have been killed, new ReplicaSet can be scaled up further, ensuring that total number of pods running at any time during the update is at most 130% of desired pods. maxUnavailable integer-or-string The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding down. This can not be 0 if MaxSurge is 0. Defaults to 25%. Example: when this is set to 30%, the old ReplicaSet can be scaled down to 70% of desired pods immediately when the rolling update starts. Once new pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of pods available at all times during the update is at least 70% of desired pods. 3.1.61. .spec.install.spec.deployments[].spec.template Description Template describes the pods that will be created. The only allowed template.spec.restartPolicy value is "Always". Type object Property Type Description metadata `` Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 3.1.62. .spec.install.spec.deployments[].spec.template.spec Description Specification of the desired behavior of the pod. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Required containers Property Type Description activeDeadlineSeconds integer Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer. affinity object If specified, the pod's scheduling constraints automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted. containers array List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. containers[] object A single application container that you want to run within a pod. dnsConfig object Specifies the DNS parameters of a pod. Parameters specified here will be merged to the generated DNS configuration based on DNSPolicy. dnsPolicy string Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. enableServiceLinks boolean EnableServiceLinks indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links. Optional: Defaults to true. ephemeralContainers array List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. ephemeralContainers[] object An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. hostAliases array HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. hostIPC boolean Use the host's ipc namespace. Optional: Default to false. hostNetwork boolean Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. hostPID boolean Use the host's pid namespace. Optional: Default to false. hostUsers boolean Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. 
Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature. hostname string Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value. imagePullSecrets array ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ initContainers[] object A single application container that you want to run within a pod. nodeName string NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ os object Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set. 
If the OS field is set to linux, the following fields must be unset: -securityContext.windowsOptions If the OS field is set to windows, following fields must be unset: - spec.hostPID - spec.hostIPC - spec.hostUsers - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.containers[ ].securityContext.seLinuxOptions - spec.containers[ ].securityContext.seccompProfile - spec.containers[ ].securityContext.capabilities - spec.containers[ ].securityContext.readOnlyRootFilesystem - spec.containers[ ].securityContext.privileged - spec.containers[ ].securityContext.allowPrivilegeEscalation - spec.containers[ ].securityContext.procMount - spec.containers[ ].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup overhead integer-or-string Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md preemptionPolicy string PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. priority integer The priority value. Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority. priorityClassName string If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. readinessGates array If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates readinessGates[] object PodReadinessGate contains the reference to a pod condition resourceClaims array ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. resourceClaims[] object PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. 
Containers that need access to the ResourceClaim reference it with this name. restartPolicy string Restart policy for all containers within the pod. One of Always, OnFailure, Never. In some contexts, only a subset of those values may be permitted. Defaults to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy runtimeClassName string RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class schedulerName string If specified, the pod will be dispatched by the specified scheduler. If not specified, the pod will be dispatched by the default scheduler. schedulingGates array SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. SchedulingGates can only be set at pod creation time, and be removed only afterwards. This is a beta feature enabled by the PodSchedulingReadiness feature gate. schedulingGates[] object PodSchedulingGate is associated to a Pod to guard its scheduling. securityContext object SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. serviceAccount string DeprecatedServiceAccount is a deprecated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ setHostnameAsFQDN boolean If true, the pod's hostname will be configured as the pod's FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters to FQDN. If a pod does not have FQDN, this has no effect. Defaults to false. shareProcessNamespace boolean Share a single process namespace between all of the containers in a pod. When this is set, containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Defaults to false. subdomain string If specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>". If not specified, the pod will not have a domain name at all. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead.
The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. volumes array List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. 3.1.63. .spec.install.spec.deployments[].spec.template.spec.affinity Description If specified, the pod's scheduling constraints Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 3.1.64. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 3.1.65. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 3.1.66. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 3.1.67. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 3.1.68. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 3.1.69. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.70. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 3.1.71. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. 
values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.72. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 3.1.73. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 3.1.74. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 3.1.75. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 3.1.76. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.77. 
.spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 3.1.78. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.79. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 3.1.80. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. 
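For orientation only, the node affinity fields described above might be combined in a deployment's pod template as in the following sketch; the label keys and values are illustrative assumptions, not values defined by this API:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/arch              # assumed node label key
          operator: In
          values:
          - amd64
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 50                               # weight must be in the range 1-100
      preference:
        matchExpressions:
        - key: node-role.kubernetes.io/infra   # assumed node label key
          operator: Exists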
The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 3.1.81. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 3.1.82. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.83. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.84. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.85. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.86. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.87. 
.spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.88. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.89. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 3.1.90. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. 
Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.91. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.92. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.93. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.94. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. 
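As a minimal sketch of the required pod affinity term fields described above (the app label is an assumption for illustration), a term that co-locates the pod on the same node as pods carrying that label could look like this:

affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: example-backend                 # assumed label
      topologyKey: kubernetes.io/hostname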
The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.95. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.96. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.97. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. 
all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 3.1.98. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 3.1.99. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 3.1.100. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. 
This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.101. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.102. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.103. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.104. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". 
An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.105. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.106. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.107. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 3.1.108. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. 
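For illustration, a hedged sketch of the preferred pod anti-affinity fields described above, which spreads replicas that carry an assumed app label across nodes:

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100                              # weight must be in the range 1-100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app                           # assumed label key
            operator: In
            values:
            - example-operator
        topologyKey: kubernetes.io/hostname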
The default value is empty. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. Also, MatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with LabelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both MismatchLabelKeys and LabelSelector. Also, MismatchLabelKeys cannot be set when LabelSelector isn't set. This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.109. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.110. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.111. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. 
operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.112. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.113. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.114. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.115. .spec.install.spec.deployments[].spec.template.spec.containers Description List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. Type array 3.1.116. .spec.install.spec.deployments[].spec.template.spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated.
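As an illustrative sketch only (the container name, image, and variable are assumptions), args can reference a container environment variable with the expansion syntax described above, written in a manifest as a dollar sign followed by the variable name in parentheses:

containers:
- name: example-manager                  # assumed container name
  image: quay.io/example/operator:v1.0   # assumed image
  env:
  - name: LOG_LEVEL
    value: debug
  command:
  - /manager                             # assumed entrypoint binary
  args:
  - --log-level=$(LOG_LEVEL)             # expanded from the container's environment at runtime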
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information, see https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated.
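A minimal sketch of the resources field, with purely illustrative request and limit values:

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi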
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true, the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicates how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure.
FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.117. .spec.install.spec.deployments[].spec.template.spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.118. .spec.install.spec.deployments[].spec.template.spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 3.1.119. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 3.1.120. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.121. 
.spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.122. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.123. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.124. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.125. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 3.1.126. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 3.1.127. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 3.1.128. 
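Taken together, the env and envFrom fields described in the preceding sections support both literal values and references into other API objects. The following sketch shows the common patterns; the ConfigMap, Secret, container, and variable names are illustrative assumptions, not values defined by the API.

env:
- name: WATCH_NAMESPACE                   # literal value
  value: ""
- name: POD_NAME                          # downward API reference via fieldRef
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: MEMORY_LIMIT                      # resourceFieldRef exposes resource limits and requests
  valueFrom:
    resourceFieldRef:
      containerName: example-operator
      resource: limits.memory
      divisor: "1Mi"
- name: DATABASE_PASSWORD                 # secretKeyRef selects one key of a Secret
  valueFrom:
    secretKeyRef:
      name: example-db-credentials
      key: password
      optional: false
- name: LOG_LEVEL                         # configMapKeyRef selects one key of a ConfigMap
  valueFrom:
    configMapKeyRef:
      name: example-config
      key: log-level
      optional: true
envFrom:
- prefix: APP_                            # optional prefix applied to every imported key
  configMapRef:
    name: example-config
- secretRef:
    name: example-db-credentials

When the same key is defined both through envFrom and through an explicit env entry, the env entry takes precedence, as noted above.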
.spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 3.1.129. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.130. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.131. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 
scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.132. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.133. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.134. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 3.1.135. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.136. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.137. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.138. 
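As a rough sketch of the lifecycle handlers described in these sections, the following container fragment runs a command after the container is created and performs an HTTP request before the container is stopped. The script path, endpoint, header, and port are assumptions for illustration only.

lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "/scripts/post-start.sh"]   # exec'd directly; a shell must be invoked explicitly
  preStop:
    httpGet:
      path: /shutdown
      port: 8443
      scheme: HTTPS
      httpHeaders:
      - name: X-Shutdown-Reason
        value: preStop

Both handlers block other management of the container until they complete, so long-running hooks delay container startup and termination.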
.spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.139. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.140. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.141. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 3.1.142. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.143. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. 
terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.144. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.145. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.146. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.147. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.148. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.149. 
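As a sketch of the liveness probe fields covered above, the following fragment uses a gRPC health check with explicit thresholds; the port and service name are illustrative assumptions. A probe specifies a single handler (exec, grpc, httpGet, or tcpSocket), so an exec or httpGet handler can be substituted for the grpc block.

livenessProbe:
  grpc:
    port: 9443
    service: liveness                     # placed in the gRPC HealthCheckRequest
  initialDelaySeconds: 15
  periodSeconds: 10                       # default is 10 seconds
  timeoutSeconds: 1
  failureThreshold: 3                     # default is 3
  successThreshold: 1                     # must be 1 for liveness probes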
.spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.150. .spec.install.spec.deployments[].spec.template.spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 3.1.151. .spec.install.spec.deployments[].spec.template.spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 3.1.152. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. 
Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.153. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.154. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.155. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.156. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.157. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.158. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.159. 
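The readiness probe uses the same schema as the liveness probe. The sketch below checks an HTTPS endpoint with a custom header; the path, port, and header values are illustrative only. The startupProbe field described later in this reference accepts the same structure.

readinessProbe:
  httpGet:
    path: /readyz
    port: 8443                            # may also be given as an IANA_SVC_NAME port name
    scheme: HTTPS
    httpHeaders:
    - name: Accept
      value: application/json
  periodSeconds: 5
  timeoutSeconds: 2
  failureThreshold: 3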
.spec.install.spec.deployments[].spec.template.spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 3.1.160. .spec.install.spec.deployments[].spec.template.spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 3.1.161. .spec.install.spec.deployments[].spec.template.spec.containers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.162. .spec.install.spec.deployments[].spec.template.spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.163. .spec.install.spec.deployments[].spec.template.spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.164. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. 
Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.165. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.166. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. 
May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.167. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.168. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.169. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. 
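The following fragment is a minimal sketch combining the resources, resizePolicy, and securityContext fields described in the preceding sections. The resource quantities are illustrative assumptions; the security settings shown are simply an example of a restrictive, non-root configuration.

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
resizePolicy:
- resourceName: cpu
  restartPolicy: NotRequired              # default when unspecified
- resourceName: memory
  restartPolicy: RestartContainer
securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop:
    - ALL
  seccompProfile:
    type: RuntimeDefault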
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.170. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.171. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.172. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. 
HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.173. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.174. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.175. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.176. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.177. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.178. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 3.1.179. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.180. .spec.install.spec.deployments[].spec.template.spec.dnsConfig Description Specifies the DNS parameters of a pod. Parameters specified here will be merged to the generated DNS configuration based on DNSPolicy. 
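A minimal sketch of the dnsConfig stanza whose description appears here; its fields are detailed immediately below. The server address, search domain, and resolver options are illustrative only.

dnsConfig:
  nameservers:
  - 192.0.2.10                            # appended to the nameservers generated from DNSPolicy
  searches:
  - example.svc.cluster.local             # appended to the generated search paths
  options:
  - name: ndots
    value: "2"
  - name: edns0                           # options may omit a value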
Type object Property Type Description nameservers array (string) A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. options array A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. options[] object PodDNSConfigOption defines DNS resolver options of a pod. searches array (string) A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. 3.1.181. .spec.install.spec.deployments[].spec.template.spec.dnsConfig.options Description A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. Type array 3.1.182. .spec.install.spec.deployments[].spec.template.spec.dnsConfig.options[] Description PodDNSConfigOption defines DNS resolver options of a pod. Type object Property Type Description name string Required. value string 3.1.183. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers Description List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. Type array 3.1.184. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[] Description An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". 
Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Lifecycle is not allowed for ephemeral containers. livenessProbe object Probes are not allowed for ephemeral containers. name string Name of the ephemeral container specified as a DNS_LABEL. This name must be unique among all containers, init containers and ephemeral containers. ports array Ports are not allowed for ephemeral containers. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probes are not allowed for ephemeral containers. resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod. restartPolicy string Restart policy for the container to manage the restart behavior of each container within a pod. This may only be set for init containers. You cannot set this field on ephemeral containers. securityContext object Optional: SecurityContext defines the security options the ephemeral container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. startupProbe object Probes are not allowed for ephemeral containers. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false targetContainerName string If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. 
If not set then the ephemeral container uses the namespaces configured in the Pod spec. The container runtime must implement support for this feature. If the runtime does not support namespace targeting then the result of setting this field is undefined. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.185. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.186. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 3.1.187. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. 
resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 3.1.188. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.189. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.190. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.191. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.192. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.193. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 3.1.194. 
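Ephemeral containers are added through the pod's ephemeralcontainers subresource rather than by editing the deployment template directly. The following sketch shows the shape of such an entry using the fields described above; the container name, image, and target container are assumed for illustration. Note that lifecycle hooks, probes, ports, and resources are not allowed for ephemeral containers, so they are omitted here.

ephemeralContainers:
- name: debugger                          # must be unique among all containers in the pod
  image: registry.access.redhat.com/ubi9/ubi:latest   # illustrative debug image
  command: ["/bin/sh"]
  stdin: true
  tty: true
  targetContainerName: example-operator   # shares that container's namespaces (IPC, PID, etc.)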
.spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 3.1.195. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 3.1.196. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle Description Lifecycle is not allowed for ephemeral containers. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 3.1.197. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.198. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. 
Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.199. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.200. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.201. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.202. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 3.1.203. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.204. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. 
TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.205. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.206. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.207. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.208. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.209. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 3.1.210. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.211. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe Description Probes are not allowed for ephemeral containers. Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. 
grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.212. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.213. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.214. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 
scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.215. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.216. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.217. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.218. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].ports Description Ports are not allowed for ephemeral containers. Type array 3.1.219. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 3.1.220. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe Description Probes are not allowed for ephemeral containers. Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. 
The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.221. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.222. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.223. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.224. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.225. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value
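Although probes are not allowed for ephemeral containers, the same probe schema is used by the regular containers and initContainers fields of the deployment pod template. For illustration only, a readinessProbe that uses the httpGet action with a custom header might look like the following sketch; the port name, path, and header values are placeholders chosen for this example:

readinessProbe:
  httpGet:
    path: /readyz              # path to access on the HTTP server
    port: metrics              # named container port (IANA_SVC_NAME) or a number from 1 to 65535
    scheme: HTTPS              # defaults to HTTP when omitted
    httpHeaders:
    - name: X-Probe-Source     # canonicalized on output, so case variants are treated as the same header
      value: readiness
  initialDelaySeconds: 5
  periodSeconds: 10            # defaults to 10 seconds
  failureThreshold: 3          # defaults to 3
  timeoutSeconds: 1            # defaults to 1 second

A tcpSocket, grpc, or exec action can be used in place of httpGet, but a probe specifies only one action type.

3.1.226.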
.spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.227. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resizePolicy Description Resources resize policy for the container. Type array 3.1.228. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 3.1.229. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resources Description Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.230. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.231. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.232. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext Description Optional: SecurityContext defines the security options the ephemeral container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. 
This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.233. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. 
Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.234. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.235. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.236. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
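As a hedged sketch of how the securityContext fields described above fit together on a container, the following shows common hardening choices; these values are illustrative, not requirements of this API:

securityContext:
  runAsNonRoot: true           # the kubelet refuses to start the container if the image would run as UID 0
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop:
    - ALL                      # remove all default capabilities granted by the container runtime
  seccompProfile:
    type: RuntimeDefault       # use the container runtime's default seccomp profile

When type is Localhost, localhostProfile must name a profile file preconfigured on the node. The seLinuxOptions and windowsOptions objects apply only when the pod runs on Linux and Windows nodes respectively.

3.1.237.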
.spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe Description Probes are not allowed for ephemeral containers. Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.238. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.239. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.240. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. 
You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.241. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.242. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.243. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.244. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.245. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.246. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. Type array 3.1.247. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). 
SubPathExpr and SubPath are mutually exclusive. 3.1.248. .spec.install.spec.deployments[].spec.template.spec.hostAliases Description HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. Type array 3.1.249. .spec.install.spec.deployments[].spec.template.spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 3.1.250. .spec.install.spec.deployments[].spec.template.spec.imagePullSecrets Description ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod Type array 3.1.251. .spec.install.spec.deployments[].spec.template.spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.252. .spec.install.spec.deployments[].spec.template.spec.initContainers Description List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Type array 3.1.253. .spec.install.spec.deployments[].spec.template.spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided.
Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always".
For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false.
volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.254. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.255. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 3.1.256. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 3.1.257. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.258. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version.
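The valueFrom sources described above might be used on an init container as in the following sketch; the container name (manager), Secret name (example-secret), and key are placeholders for illustration only:

env:
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace   # downward API field of the pod
- name: MEMORY_LIMIT_MI
  valueFrom:
    resourceFieldRef:
      containerName: manager          # optional for env vars, required for volumes
      resource: limits.memory
      divisor: 1Mi                    # output format of the exposed resource; defaults to "1"
- name: API_TOKEN
  valueFrom:
    secretKeyRef:
      name: example-secret            # placeholder Secret name
      key: token
      optional: false                 # the Secret and key must be defined

Only one source may be set per valueFrom, and valueFrom cannot be combined with a non-empty value on the same variable.

3.1.259.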
.spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.260. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.261. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.262. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 3.1.263. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 3.1.264. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 3.1.265. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. 
More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 3.1.266. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.267. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.268. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.269. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.270. 
.spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.271. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 3.1.272. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.273. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.274. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.275. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. 
You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.276. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.277. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.278. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 3.1.279. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.280. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. 
The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.281. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.282. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.283. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.284. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.285. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.286. 
.spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.287. .spec.install.spec.deployments[].spec.template.spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 3.1.288. .spec.install.spec.deployments[].spec.template.spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 3.1.289. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. 
Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.290. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.291. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.292. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.293. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.294. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.295. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.296. 
.spec.install.spec.deployments[].spec.template.spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 3.1.297. .spec.install.spec.deployments[].spec.template.spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 3.1.298. .spec.install.spec.deployments[].spec.template.spec.initContainers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.299. .spec.install.spec.deployments[].spec.template.spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.300. .spec.install.spec.deployments[].spec.template.spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.301. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. 
Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.302. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.303. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. 
May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.304. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.305. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.306. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.307. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.308. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.309. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. 
HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.310. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.311. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.312. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.313. .spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.314. .spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.315. .spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 3.1.316. .spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.317. .spec.install.spec.deployments[].spec.template.spec.os Description Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set.
If the OS field is set to linux, the following fields must be unset: - securityContext.windowsOptions If the OS field is set to windows, the following fields must be unset: - spec.hostPID - spec.hostIPC - spec.hostUsers - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.containers[*].securityContext.seLinuxOptions - spec.containers[*].securityContext.seccompProfile - spec.containers[*].securityContext.capabilities - spec.containers[*].securityContext.readOnlyRootFilesystem - spec.containers[*].securityContext.privileged - spec.containers[*].securityContext.allowPrivilegeEscalation - spec.containers[*].securityContext.procMount - spec.containers[*].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup Type object Required name Property Type Description name string Name is the name of the operating system. The currently supported values are linux and windows. Additional values may be defined in the future and can be one of: https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration Clients should expect to handle additional values and treat unrecognized values in this field as os: null 3.1.318. .spec.install.spec.deployments[].spec.template.spec.readinessGates Description If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates Type array 3.1.319. .spec.install.spec.deployments[].spec.template.spec.readinessGates[] Description PodReadinessGate contains the reference to a pod condition Type object Required conditionType Property Type Description conditionType string ConditionType refers to a condition in the pod's condition list with matching type. 3.1.320. .spec.install.spec.deployments[].spec.template.spec.resourceClaims Description ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. Type array 3.1.321. .spec.install.spec.deployments[].spec.template.spec.resourceClaims[] Description PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name. Type object Required name Property Type Description name string Name uniquely identifies this resource claim inside the pod. This must be a DNS_LABEL. source object Source describes where to find the ResourceClaim. 3.1.322. .spec.install.spec.deployments[].spec.template.spec.resourceClaims[].source Description Source describes where to find the ResourceClaim. Type object Property Type Description resourceClaimName string ResourceClaimName is the name of a ResourceClaim object in the same namespace as this pod. resourceClaimTemplateName string ResourceClaimTemplateName is the name of a ResourceClaimTemplate object in the same namespace as this pod.
The template will be used to create a new ResourceClaim, which will be bound to this pod. When this pod is deleted, the ResourceClaim will also be deleted. The pod name and resource name, along with a generated component, will be used to form a unique name for the ResourceClaim, which will be recorded in pod.status.resourceClaimStatuses. This field is immutable and no changes will be made to the corresponding ResourceClaim by the control plane after creating the ResourceClaim. 3.1.323. .spec.install.spec.deployments[].spec.template.spec.schedulingGates Description SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. SchedulingGates can only be set at pod creation time, and be removed only afterwards. This is a beta feature enabled by the PodSchedulingReadiness feature gate. Type array 3.1.324. .spec.install.spec.deployments[].spec.template.spec.schedulingGates[] Description PodSchedulingGate is associated to a Pod to guard its scheduling. Type object Required name Property Type Description name string Name of the scheduling gate. Each scheduling gate must have a unique name field. 3.1.325. .spec.install.spec.deployments[].spec.template.spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.326. .spec.install.spec.deployments[].spec.template.spec.securityContext.seLinuxOptions Description The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.327. .spec.install.spec.deployments[].spec.template.spec.securityContext.seccompProfile Description The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. 
RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.328. .spec.install.spec.deployments[].spec.template.spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 3.1.329. .spec.install.spec.deployments[].spec.template.spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 3.1.330. .spec.install.spec.deployments[].spec.template.spec.securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.331. .spec.install.spec.deployments[].spec.template.spec.tolerations Description If specified, the pod's tolerations. Type array 3.1.332. .spec.install.spec.deployments[].spec.template.spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. 
value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 3.1.333. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints Description TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. Type array 3.1.334. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. 
For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. 3.1.335. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. 
Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.336. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.337. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.338. .spec.install.spec.deployments[].spec.template.spec.volumes Description List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 3.1.339. .spec.install.spec.deployments[].spec.template.spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. 
Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 3.1.340. .spec.install.spec.deployments[].spec.template.spec.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 3.1.341. .spec.install.spec.deployments[].spec.template.spec.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). 
ReadOnly here will force the ReadOnly setting in VolumeMounts. 3.1.342. .spec.install.spec.deployments[].spec.template.spec.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 3.1.343. .spec.install.spec.deployments[].spec.template.spec.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 3.1.344. .spec.install.spec.deployments[].spec.template.spec.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.345. .spec.install.spec.deployments[].spec.template.spec.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 3.1.346. .spec.install.spec.deployments[].spec.template.spec.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. 
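As a sketch of how the cinder volume and its optional secretRef described above fit into the pod template's volumes list (the volume name, volume ID, and secret name below are hypothetical placeholders, not values from the source):

volumes:
- name: cinder-data                                   # hypothetical volume name
  cinder:
    volumeID: 90d6900d-808f-4ddb-a30e-5ef734b4a1c5    # hypothetical Cinder volume ID
    fsType: ext4
    readOnly: false
    secretRef:
      name: openstack-credentials                     # hypothetical secret holding OpenStack connection parameters

The secret referenced here is the one the plugin consults for the OpenStack connection parameters, as the secretRef description states.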
Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.347. .spec.install.spec.deployments[].spec.template.spec.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 3.1.348. .spec.install.spec.deployments[].spec.template.spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.349. .spec.install.spec.deployments[].spec.template.spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.350. .spec.install.spec.deployments[].spec.template.spec.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. 
Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 3.1.351. .spec.install.spec.deployments[].spec.template.spec.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.352. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 3.1.353. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items Description Items is a list of downward API volume file Type array 3.1.354. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..'
path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 3.1.355. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.356. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.357. .spec.install.spec.deployments[].spec.template.spec.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 3.1.358. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. 
The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 3.1.359. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 3.1.360. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 3.1.361. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have.
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. 
If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#volumeattributesclass (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 3.1.362. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 3.1.363. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled.
(Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 3.1.364. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.365. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.366. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.367. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. 
values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.368. .spec.install.spec.deployments[].spec.template.spec.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 3.1.369. .spec.install.spec.deployments[].spec.template.spec.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 3.1.370. .spec.install.spec.deployments[].spec.template.spec.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.371. .spec.install.spec.deployments[].spec.template.spec.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 3.1.372. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 3.1.373. .spec.install.spec.deployments[].spec.template.spec.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 3.1.374. .spec.install.spec.deployments[].spec.template.spec.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 3.1.375. .spec.install.spec.deployments[].spec.template.spec.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. 
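For reference, a hostPath entry in the pod template's volumes list might look like the following sketch; the volume name and host path are hypothetical placeholders chosen for illustration:

volumes:
- name: host-logs        # hypothetical volume name
  hostPath:
    path: /var/log       # hypothetical pre-existing directory on the host
    type: Directory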
More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 3.1.376. .spec.install.spec.deployments[].spec.template.spec.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 3.1.377. .spec.install.spec.deployments[].spec.template.spec.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.378. .spec.install.spec.deployments[].spec.template.spec.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 3.1.379. .spec.install.spec.deployments[].spec.template.spec.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 3.1.380. .spec.install.spec.deployments[].spec.template.spec.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 3.1.381. .spec.install.spec.deployments[].spec.template.spec.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 3.1.382. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 3.1.383. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources Description sources is the list of volume projections Type array 3.1.384. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description clusterTrustBundle object ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. 
Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 3.1.385. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].clusterTrustBundle Description ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. Type object Required path Property Type Description labelSelector object Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". name string Select a single ClusterTrustBundle by object name. Mutually-exclusive with signerName and labelSelector. optional boolean If true, don't block pod startup if the referenced ClusterTrustBundle(s) aren't available. If using name, then the named ClusterTrustBundle is allowed not to exist. If using signerName, then the combination of signerName and labelSelector is allowed to match zero ClusterTrustBundles. path string Relative path from the volume root to write the bundle. signerName string Select all ClusterTrustBundles that match this signer name. Mutually-exclusive with name. The contents of all selected ClusterTrustBundles will be unified and deduplicated. 3.1.386. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector Description Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.387. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.388. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.389. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 3.1.390. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.391. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.392. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 3.1.393. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 3.1.394. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 3.1.395. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.396. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.397. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. 
name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional field specify whether the Secret or its key must be defined 3.1.398. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.399. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.400. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 3.1.401. .spec.install.spec.deployments[].spec.template.spec.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. 
registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serivceaccount user volume string volume is a string that references an already created Quobyte volume by name. 3.1.402. .spec.install.spec.deployments[].spec.template.spec.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 3.1.403. .spec.install.spec.deployments[].spec.template.spec.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.404. .spec.install.spec.deployments[].spec.template.spec.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 3.1.405. .spec.install.spec.deployments[].spec.template.spec.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.406. .spec.install.spec.deployments[].spec.template.spec.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 3.1.407. .spec.install.spec.deployments[].spec.template.spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.408. .spec.install.spec.deployments[].spec.template.spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. 
mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.409. .spec.install.spec.deployments[].spec.template.spec.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 3.1.410. .spec.install.spec.deployments[].spec.template.spec.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.411. .spec.install.spec.deployments[].spec.template.spec.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 3.1.412. .spec.install.spec.permissions Description Type array 3.1.413. 
.spec.install.spec.permissions[] Description StrategyDeploymentPermissions describe the rbac rules and service account needed by the install strategy Type object Required rules serviceAccountName Property Type Description rules array rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. serviceAccountName string 3.1.414. .spec.install.spec.permissions[].rules Description Type array 3.1.415. .spec.install.spec.permissions[].rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "" represents the core API group and "*" represents all API groups. nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. '*' represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs. 3.1.416. .spec.installModes Description InstallModes specify supported installation types Type array 3.1.417. .spec.installModes[] Description InstallMode associates an InstallModeType with a flag representing if the CSV supports it Type object Required supported type Property Type Description supported boolean type string InstallModeType is a supported type of install mode for CSV installation 3.1.418. .spec.links Description A list of links related to the operator. Type array 3.1.419. .spec.links[] Description Type object Property Type Description name string url string 3.1.420. .spec.maintainers Description A list of organizational entities maintaining the operator. Type array 3.1.421. .spec.maintainers[] Description Type object Property Type Description email string name string 3.1.422. .spec.nativeAPIs Description Type array 3.1.423. .spec.nativeAPIs[] Description GroupVersionKind unambiguously identifies a kind. It doesn't anonymously include GroupVersion to avoid automatic coercion. It doesn't use a GroupVersion to avoid custom marshalling Type object Required group kind version Property Type Description group string kind string version string 3.1.424. .spec.provider Description The publishing entity behind the operator. Type object Property Type Description name string url string 3.1.425. .spec.relatedImages Description List any related images, or other container images that your Operator might require to perform their functions. This list should also include operand images as well. All image references should be specified by digest (SHA) and not by tag. 
This field is only used during catalog creation and plays no part in cluster runtime. Type array 3.1.426. .spec.relatedImages[] Description Type object Required image name Property Type Description image string name string 3.1.427. .spec.selector Description Label selector for related resources. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.428. .spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.429. .spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.430. .spec.webhookdefinitions Description Type array 3.1.431. .spec.webhookdefinitions[] Description WebhookDescription provides details to OLM about required webhooks Type object Required admissionReviewVersions generateName sideEffects type Property Type Description admissionReviewVersions array (string) containerPort integer conversionCRDs array (string) deploymentName string failurePolicy string FailurePolicyType specifies a failure policy that defines how unrecognized errors from the admission endpoint are handled. generateName string matchPolicy string MatchPolicyType specifies the type of match policy. objectSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. reinvocationPolicy string ReinvocationPolicyType specifies what type of policy the admission hook uses. rules array rules[] object RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. sideEffects string SideEffectClass specifies the types of side effects a webhook may have. targetPort integer-or-string timeoutSeconds integer type string WebhookAdmissionType is the type of admission webhooks supported by OLM webhookPath string 3.1.432. .spec.webhookdefinitions[].objectSelector Description A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. 
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.433. .spec.webhookdefinitions[].objectSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.434. .spec.webhookdefinitions[].objectSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.435. .spec.webhookdefinitions[].rules Description Type array 3.1.436. .spec.webhookdefinitions[].rules[] Description RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. Type object Property Type Description apiGroups array (string) APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required. apiVersions array (string) APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required. operations array (string) Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. resources array (string) Resources is a list of resources this rule applies to. For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods/*' means all subresources of pods. '*/scale' means all scale subresources. '*/*' means all resources and their subresources. If wildcard is present, the validation rule will ensure resources do not overlap with each other. Depending on the enclosing object, subresources might not be allowed. Required. scope string scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*". "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. "*" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". 3.1.437. .status Description ClusterServiceVersionStatus represents information about the status of a CSV. Status may trail the actual state of a system.
Type object Property Type Description certsLastUpdated string Last time the owned APIService certs were updated certsRotateAt string Time the owned APIService certs will rotate cleanup object CleanupStatus represents information about the status of cleanup while a CSV is pending deletion conditions array List of conditions, a history of state transitions conditions[] object Conditions appear in the status as a record of state transitions on the ClusterServiceVersion lastTransitionTime string Last time the status transitioned from one status to another. lastUpdateTime string Last time we updated the status message string A human readable message indicating details about why the ClusterServiceVersion is in this condition. phase string Current condition of the ClusterServiceVersion reason string A brief CamelCase message indicating details about why the ClusterServiceVersion is in this state. e.g. 'RequirementsNotMet' requirementStatus array The status of each requirement for this CSV requirementStatus[] object 3.1.438. .status.cleanup Description CleanupStatus represents information about the status of cleanup while a CSV is pending deletion Type object Property Type Description pendingDeletion array PendingDeletion is the list of custom resource objects that are pending deletion and blocked on finalizers. This indicates the progress of cleanup that is blocking CSV deletion or operator uninstall. pendingDeletion[] object ResourceList represents a list of resources which are of the same Group/Kind 3.1.439. .status.cleanup.pendingDeletion Description PendingDeletion is the list of custom resource objects that are pending deletion and blocked on finalizers. This indicates the progress of cleanup that is blocking CSV deletion or operator uninstall. Type array 3.1.440. .status.cleanup.pendingDeletion[] Description ResourceList represents a list of resources which are of the same Group/Kind Type object Required group instances kind Property Type Description group string instances array instances[] object kind string 3.1.441. .status.cleanup.pendingDeletion[].instances Description Type array 3.1.442. .status.cleanup.pendingDeletion[].instances[] Description Type object Required name Property Type Description name string namespace string Namespace can be empty for cluster-scoped resources 3.1.443. .status.conditions Description List of conditions, a history of state transitions Type array 3.1.444. .status.conditions[] Description Conditions appear in the status as a record of state transitions on the ClusterServiceVersion Type object Property Type Description lastTransitionTime string Last time the status transitioned from one status to another. lastUpdateTime string Last time we updated the status message string A human readable message indicating details about why the ClusterServiceVersion is in this condition. phase string Condition of the ClusterServiceVersion reason string A brief CamelCase message indicating details about why the ClusterServiceVersion is in this state. e.g. 'RequirementsNotMet' 3.1.445. .status.requirementStatus Description The status of each requirement for this CSV Type array 3.1.446. 
.status.requirementStatus[] Description Type object Required group kind message name status version Property Type Description dependents array dependents[] object DependentStatus is the status for a dependent requirement (to prevent infinite nesting) group string kind string message string name string status string StatusReason is a camelcased reason for the status of a RequirementStatus or DependentStatus uuid string version string 3.1.447. .status.requirementStatus[].dependents Description Type array 3.1.448. .status.requirementStatus[].dependents[] Description DependentStatus is the status for a dependent requirement (to prevent infinite nesting) Type object Required group kind status version Property Type Description group string kind string message string status string StatusReason is a camelcased reason for the status of a RequirementStatus or DependentStatus uuid string version string 3.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1alpha1/clusterserviceversions GET : list objects of kind ClusterServiceVersion /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions DELETE : delete collection of ClusterServiceVersion GET : list objects of kind ClusterServiceVersion POST : create a ClusterServiceVersion /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name} DELETE : delete a ClusterServiceVersion GET : read the specified ClusterServiceVersion PATCH : partially update the specified ClusterServiceVersion PUT : replace the specified ClusterServiceVersion /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name}/status GET : read status of the specified ClusterServiceVersion PATCH : partially update status of the specified ClusterServiceVersion PUT : replace status of the specified ClusterServiceVersion 3.2.1. /apis/operators.coreos.com/v1alpha1/clusterserviceversions HTTP method GET Description list objects of kind ClusterServiceVersion Table 3.1. HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersionList schema 401 - Unauthorized Empty 3.2.2. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions HTTP method DELETE Description delete collection of ClusterServiceVersion Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterServiceVersion Table 3.3. HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersionList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterServiceVersion Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body ClusterServiceVersion schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersion schema 201 - Created ClusterServiceVersion schema 202 - Accepted ClusterServiceVersion schema 401 - Unauthorized Empty 3.2.3. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name} Table 3.7. Global path parameters Parameter Type Description name string name of the ClusterServiceVersion HTTP method DELETE Description delete a ClusterServiceVersion Table 3.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterServiceVersion Table 3.10. HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterServiceVersion Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterServiceVersion Table 3.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.14. Body parameters Parameter Type Description body ClusterServiceVersion schema Table 3.15. HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersion schema 201 - Created ClusterServiceVersion schema 401 - Unauthorized Empty 3.2.4. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name}/status Table 3.16. Global path parameters Parameter Type Description name string name of the ClusterServiceVersion HTTP method GET Description read status of the specified ClusterServiceVersion Table 3.17. HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterServiceVersion Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.19. HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterServiceVersion Table 3.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.21. Body parameters Parameter Type Description body ClusterServiceVersion schema Table 3.22. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 201 - Created ClusterServiceVersion schema 401 - Unauthorized Empty
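In practice, the list and status read operations described above are often exercised through the CLI rather than through raw REST calls. The following commands are a minimal sketch; the openshift-operators namespace and the ClusterServiceVersion name are placeholders that depend on which Operators are installed in your cluster:

# List ClusterServiceVersion objects in a namespace (backed by the GET list endpoint above)
oc get clusterserviceversions -n openshift-operators

# Read the status of a specific ClusterServiceVersion (backed by the GET .../status endpoint above)
oc get clusterserviceversion <csv-name> -n openshift-operators -o jsonpath='{.status.phase}'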
Chapter 55. SAP Component The SAP component is a package consisting of ten different SAP components. There are remote function call (RFC) components that support the sRFC, tRFC, and qRFC protocols and there are IDoc components that facilitate communication using messages in IDoc format. The component uses the SAP Java Connector (SAP JCo) library to facilitate bidirectional communication with SAP and the SAP IDoc library to transmit the documents in the Intermediate Document (IDoc) format. 55.1. Dependencies Add the following dependency to your pom.xml for this component: <dependency> <groupId>org.fusesource</groupId> <artifactId>camel-sap-starter</artifactId> <version>3.20.1.redhat-00056</version> </dependency> 55.1.1. Additional platform restrictions for the SAP component Because the SAP component depends on the third-party JCo 3 and IDoc 3 libraries, it can only be installed on the platforms that these libraries support. 55.1.2. SAP JCo and SAP IDoc libraries A prerequisite for using the SAP component is that the SAP Java Connector (SAP JCo) libraries and the SAP IDoc library are installed into the lib/ directory of the Java runtime. You must make sure that you download the appropriate set of SAP libraries for your target operating system from the SAP Service Marketplace. The names of the library files vary depending on the target operating system, as shown below. Table 55.1. Required SAP Libraries SAP Component Linux and UNIX Windows SAP JCo 3 sapjco3.jar libsapjco3.so sapjco3.jar sapjco3.dll SAP IDoc sapidoc3.jar sapidoc3.jar 55.2. URI format There are two different kinds of endpoint provided by the SAP component: the Remote Function Call (RFC) endpoints, and the Intermediate Document (IDoc) endpoints. The URI formats for the RFC endpoints are as follows:
sap-srfc-destination:destinationName:rfcName[?options]
sap-trfc-destination:destinationName:rfcName[?options]
sap-qrfc-destination:destinationName:queueName:rfcName[?options]
sap-srfc-server:serverName:rfcName[?options]
sap-trfc-server:serverName:rfcName[?options]
The URI formats for the IDoc endpoints are as follows:
sap-idoc-destination:destinationName:idocType[:idocTypeExtension[:systemRelease[:applicationRelease]]][?options]
sap-idoclist-destination:destinationName:idocType[:idocTypeExtension[:systemRelease[:applicationRelease]]][?options]
sap-qidoc-destination:destinationName:queueName:idocType[:idocTypeExtension[:systemRelease[:applicationRelease]]][?options]
sap-qidoclist-destination:destinationName:queueName:idocType[:idocTypeExtension[:systemRelease[:applicationRelease]]][?options]
sap-idoclist-server:serverName:idocType[:idocTypeExtension[:systemRelease[:applicationRelease]]][?options]
The URI formats prefixed by sap- endpointKind -destination are used to define destination endpoints (in other words, Camel producer endpoints) and destinationName is the name of a specific outbound connection to an SAP instance. Outbound connections are named and configured at the component level. The URI formats prefixed by sap- endpointKind -server are used to define server endpoints (in other words, Camel consumer endpoints) and serverName is the name of a specific inbound connection from an SAP instance. Inbound connections are named and configured at the component level. The other components of an RFC endpoint URI are as follows: rfcName (Required) In a destination endpoint URI, is the name of the RFC invoked by the endpoint in the connected SAP instance. In a server endpoint URI, is the name of the RFC handled by the endpoint when invoked from the connected SAP instance. queueName Specifies the queue this endpoint sends an SAP request to. The other components of an IDoc endpoint URI are as follows: idocType (Required) Specifies the Basic IDoc Type of an IDoc produced by this endpoint. idocTypeExtension Specifies the IDoc Type Extension, if any, of an IDoc produced by this endpoint. systemRelease Specifies the associated SAP Basis Release, if any, of an IDoc produced by this endpoint. applicationRelease Specifies the associated Application Release, if any, of an IDoc produced by this endpoint. queueName Specifies the queue this endpoint sends an SAP request to. 55.2.1.
Options for RFC destination endpoints The RFC destination endpoints ( sap-srfc-destination , sap-trfc-destination , and sap-qrfc-destination ) support the following URI options: Name Default Description stateful false If true , specifies that this endpoint initiates an SAP stateful session transacted false If true , specifies that this endpoint initiates an SAP transaction 55.2.2. Options for RFC server endpoints The SAP RFC server endpoints ( sap-srfc-server and sap-trfc-server ) support the following URI options: Name Default Description stateful false If true , specifies that this endpoint initiates an SAP stateful session. propagateExceptions false (sap-trfc-server endpoint only) If true , specifies that this endpoint propagates exceptions back to the caller in SAP, instead of the exchange's exception handler. 55.2.3. Options for the IDoc List Server endpoint The SAP IDoc List Server endpoint ( sap-idoclist-server ) supports the following URI options: Name Default Description stateful false If true , specifies that this endpoint initiates an SAP stateful session. propagateExceptions false If true , specifies that this endpoint propagates exceptions back to the caller in SAP, instead of the exchange's exception handler. 55.2.4. Summary of the RFC and IDoc endpoints The SAP component package provides the following RFC and IDoc endpoints: sap-srfc-destination Camel SAP Synchronous Remote Function Call Destination Camel component. This endpoint should be used in cases where Camel routes require synchronous delivery of requests to and responses from an SAP system. Note The sRFC protocol used by this component delivers requests and responses to and from an SAP system with best effort . In case of a communication error while sending a request, the completion status of a remote function call in the receiving SAP system remains in doubt . sap-trfc-destination Camel SAP Transactional Remote Function Call Destination Camel component. This endpoint should be used in cases where requests must be delivered to the receiving SAP system at most once . To accomplish this, the component generates a transaction ID, tid , which accompanies every request sent through the component in a route's exchange. The receiving SAP system records the tid accompanying a request before delivering the request; if the SAP system receives the request again with the same tid it will not deliver the request. Thus if a route encounters a communication error when sending a request through an endpoint of this component, it can retry sending the request within the same exchange knowing it will be delivered and executed only once. Note The tRFC protocol used by this component is asynchronous and does not return a response. Thus the endpoints of this component do not return a response message. Note This component does not guarantee the order of a series of requests through its endpoints, and the delivery and execution order of these requests may differ on the receiving SAP system due to communication errors and resends of a request. For guaranteed delivery order, please see the Camel SAP Queued Remote Function Call Destination Camel component. sap-qrfc-destination Camel SAP Queued Remote Function Call Destination Camel component. This component extends the capabilities of the Transactional Remote Function Call Destination camel component by adding in order delivery guarantees to the delivery of requests through its endpoints. 
This endpoint should be used in cases where a series of requests depend on each other and must be delivered to the receiving SAP system at most once and in order . The component accomplishes the at most once delivery guarantees using the same mechanisms as the Camel SAP Transactional Remote Function Call Destination Camel component. The ordering guarantee is accomplished by serializing the requests in the order they are received by the SAP system to an inbound queue . Inbound queues are processed by the QIN scheduler within SAP. When the inbound queue is activated , the QIN Scheduler will execute the queue requests in order. Note The qRFC protocol used by this component is asynchronous and does not return a response. Thus the endpoints of this component do not return a response message. sap-srfc-server Camel SAP Synchronous Remote Function Call Server Camel component. This component and its endpoints should be used in cases where a Camel route is required to synchronously handle requests from and responses to an SAP system. sap-trfc-server Camel SAP Transactional Remote Function Call Server Camel component. This endpoint should be used in cases where the sending SAP system requires at most once delivery of its requests to a Camel route. To accomplish this, the sending SAP system generates a transaction ID, tid , which accompanies every request it sends to the component's endpoints. The sending SAP system will first check with the component whether a given tid has been received by it before sending a series of requests associated with the tid . The component will check the list of received tid s it maintains, record the sent tid if it is not in that list, and then respond to the sending SAP system, indicating whether or not the tid has already been recorded. The sending SAP system will only then send the series of requests, if the tid has not been previously recorded. This enables a sending SAP system to reliably send a series of requests once to a Camel route. sap-idoc-destination Camel SAP IDoc Destination Camel component. This endpoint should be used in cases where a Camel route sends a list of Intermediate Documents (IDocs) to an SAP system. sap-idoclist-destination Camel SAP IDoc List Destination Camel component. This endpoint should be used in cases where a Camel route sends a list of Intermediate documents (IDocs) to an SAP system. sap-qidoc-destination Camel SAP Queued IDoc Destination Camel component. This component and its endpoints should be used in cases where a Camel route is required to send a list of Intermediate documents (IDocs) to an SAP system in order. sap-qidoclist-destination Camel SAP Queued IDoc List Destination Camel component. This component and its endpoints are used in cases where a Camel route sends the Intermediate documents (IDocs) list to an SAP system in order. sap-idoclist-server Camel SAP IDoc List Server Camel component. This endpoint should be used in cases where a sending SAP system requires delivery of Intermediate Document lists to a Camel route. This component uses the tRFC protocol to communicate with SAP as described in the sap-trfc-server-standalone quick start. 55.2.5. SAP RFC destination endpoint An RFC destination endpoint supports outbound communication to SAP, which enables these endpoints to make RFC calls out to ABAP function modules in SAP. An RFC destination endpoint is configured to make an RFC call to a specific ABAP function over a specific connection to an SAP instance.
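For example, a route that forwards a request from a direct endpoint to an SAP function module through such an endpoint might look like the following sketch. The destination name quickstartDest , the direct endpoint name, and the BAPI_FLCUST_GETLIST function are illustrative only; the destination itself must be configured at the component level as described in the Configuration section:

<camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <route>
        <!-- The exchange body carries the RFC request; the response is returned in the OUT message -->
        <from uri="direct:getFlightCustomerList"/>
        <to uri="sap-srfc-destination:quickstartDest:BAPI_FLCUST_GETLIST"/>
    </route>
</camelContext>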
An RFC destination is a logical designation for an outbound connection and has a unique name. An RFC destination is specified by a set of connection parameters called destination data . An RFC destination endpoint will extract an RFC request from the input message of the IN-OUT exchanges it receives and dispatch that request in a function call to SAP. The response from the function call will be returned in the output message of the exchange. Since SAP RFC destination endpoints only support outbound communication, an RFC destination endpoint only supports the creation of producers. 55.2.6. SAP RFC server endpoint An RFC server endpoint supports inbound communication from SAP, which enables ABAP applications in SAP to make RFC calls into server endpoints. An ABAP application interacts with an RFC server endpoint as if it were a remote function module. An RFC server endpoint is configured to receive an RFC call to a specific RFC function over a specific connection from an SAP instance. An RFC server is a logical designation for an inbound connection and has a unique name. An RFC server is specified by a set of connection parameters called server data . An RFC server endpoint will handle an incoming RFC request and dispatch it as the input message of an IN-OUT exchange. The output message of the exchange will be returned as the response of the RFC call. Since SAP RFC server endpoints only support inbound communication, an RFC server endpoint only supports the creation of consumers. 55.2.7. SAP IDoc and IDoc list destination endpoints An IDoc destination endpoint supports outbound communication to SAP, which can then perform further processing on the IDoc message. An IDoc document represents a business transaction, which can easily be exchanged with non-SAP systems. An IDoc destination is specified by a set of connection parameters called destination data . An IDoc list destination endpoint is similar to an IDoc destination endpoint, except that the messages it handles consist of a list of IDoc documents. 55.2.8. SAP IDoc list server endpoint An IDoc list server endpoint supports inbound communication from SAP, enabling a Camel route to receive a list of IDoc documents from an SAP system. An IDoc list server is specified by a set of connection parameters called server data . 55.2.9. Metadata repositories A metadata repository is used to store the following kinds of metadata: Interface descriptions of function modules This metadata is used by the JCo and ABAP runtimes to check RFC calls to ensure the type-safe transfer of data between communication partners before dispatching those calls. A repository is populated with repository data. Repository data is a map of named function templates. A function template contains the metadata describing all the parameters and their typing information passed to and from a function module and has the unique name of the function module it describes. IDoc type descriptions This metadata is used by the IDoc runtime to ensure that the IDoc documents are correctly formatted before being sent to a communication partner. A basic IDoc type consists of a name, a list of permitted segments, and a description of the hierarchical relationship between the segments. Some additional constraints can be imposed on the segments: a segment can be mandatory or optional; and it is possible to specify a minimum/maximum range for each segment (defining the number of allowed repetitions of that segment). 
SAP destination and server endpoints thus require access to a repository, in order to send and receive RFC calls and in order to send and receive IDoc documents. For RFC calls, the metadata for all function modules invoked and handled by the endpoints must reside within the repository; and for IDoc endpoints, the metadata for all IDoc types and IDoc type extensions handled by the endpoints must reside within the repository. The location of the repository used by a destination and server endpoint is specified in the destination data and the server data of their respective connections. In the case of an SAP destination endpoint, the repository it uses typically resides in an SAP system and it defaults to the SAP system it is connected to. This default requires no explicit configuration in the destination data. Furthermore, the metadata for the remote function call that a destination endpoint makes will already exist in a repository for any existing function module that it calls. The metadata for calls made by destination endpoints thus require no configuration in the SAP component. On the other hand, the metadata for function calls handled by server endpoints do not typically reside in the repository of an SAP system and must instead be provided by a repository residing in the SAP component. The SAP component maintains a map of named metadata repositories. The name of a repository corresponds to the name of the server to which it provides metadata. 55.3. Configuration The SAP component maintains three maps to store destination data, server data, and repository data. The destination data store and the server data store are configured on a special configuration object, SapConnectionConfiguration , which automatically gets injected into the SAP component (in the context of Blueprint XML configuration or Spring XML configuration files). The repository data store must be configured directly on the relevant SAP component. 55.3.1. Configuration Overview The SAP component maintains three maps to store destination data, server data, and repository data. The component's property, destinationDataStore , stores destination data keyed by destination name, the property, serverDataStore , stores server data keyed by server name and the property, repositoryDataStore , stores repository data keyed by repository name. These configurations must be passed to the component during its initialization. Example The following example shows how to configure a sample destination data store and a sample server data store in a Blueprint XML file. The sap-configuration bean (of type SapConnectionConfiguration ) will automatically be injected into any SAP component that is used in this XML file. 55.3.2. Destination Configuration The configurations for destinations are maintained in the destinationDataStore property of the SAP component. Each entry in this map configures a distinct outbound connection to an SAP instance. The key for each entry is the name of the outbound connection and is used in the destinationName component of a destination endpoint URI as described in the URI format section. The value for each entry is a destination data configuration object - org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl - that specifies the configuration of an outbound SAP connection. Sample destination configuration The following Blueprint XML code shows how to configure a sample destination with the name, quickstartDest . 
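A minimal destination configuration of this kind might look like the following sketch. The connection values (host, system number, client, user, password, and language) are placeholders, and the fully qualified class names of the configuration and interceptor beans are given as they are commonly used with this component; verify them against the version of the component you have installed:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <!-- Configures the outbound connections (destinations) of the SAP component -->
    <bean id="sap-configuration" class="org.fusesource.camel.component.sap.SapConnectionConfiguration">
        <property name="destinationDataStore">
            <map>
                <entry key="quickstartDest" value-ref="quickstartDestinationData"/>
            </map>
        </property>
    </bean>

    <!-- Destination data for the connection named 'quickstartDest' (placeholder values) -->
    <bean id="quickstartDestinationData" class="org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl">
        <property name="ashost" value="example.sap.example.com"/>
        <property name="sysnr" value="00"/>
        <property name="client" value="000"/>
        <property name="user" value="username"/>
        <property name="passwd" value="password"/>
        <property name="lang" value="en"/>
    </bean>

    <!-- Interceptor strategy required for transactional (tRFC/qRFC) destination endpoints -->
    <bean id="currentProcessorDefinitionInterceptStrategy" class="org.fusesource.camel.component.sap.CurrentProcessorDefinitionInterceptStrategy"/>

</blueprint>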
For example, after configuring the destination as shown in the preceding Blueprint XML file, you could invoke the BAPI_FLCUST_GETLIST remote function call on the quickstartDest destination using the following URI:
sap-srfc-destination:quickstartDest:BAPI_FLCUST_GETLIST
55.3.2.1. Interceptor for tRFC and qRFC destinations The preceding sample destination configuration shows the instantiation of a CurrentProcessorDefinitionInterceptStrategy object. This object installs an interceptor in the Camel runtime, which enables the Camel SAP component to keep track of its position within a Camel route while it is handling RFC transactions. Important This interceptor is critically important for transactional RFC destination endpoints (such as sap-trfc-destination and sap-qrfc-destination ) and must be installed in the Camel runtime for outbound transactional RFC communication to be properly managed. The Destination RFC Transaction Handlers issue warnings into the Camel log if the strategy is not found at runtime. In this situation the Camel runtime will need to be re-provisioned and restarted to properly manage outbound transactional RFC communication. 55.3.2.2. Log on and authentication options The following table lists the log on and authentication options for configuring a destination in the SAP destination data store: Name Default Value Description client SAP client, mandatory log on parameter. user log on user, log on parameter for password based authentication. aliasUser log on user alias, can be used instead of log on user. userId User identity used for log on to the ABAP AS. Used by the JCo runtime, if the destination configuration uses SSO/assertion ticket, certificate, current user, or SNC environment for authentication. The user ID is mandatory, if neither user nor user alias is set. This ID will never be sent to the SAP backend, it will be used by the JCo runtime locally. passwd log on password, log on parameter for password based authentication. lang log on language, if not defined, the default user language is used. mysapsso2 Use the specified SAP Cookie Version 2 as a log on ticket for SSO based authentication. x509cert Use the specified X509 certificate for certificate based authentication. lcheck Postpone the authentication until the first call - 1 (enable). Used in special cases only. useSapGui Use a visible, hidden, or do not use SAP GUI. codePage Additional log on parameter to define the codepage used to convert the log on parameters. Used in special cases only. getsso2 Order an SSO ticket after log on, the obtained ticket is available in the destination attributes. denyInitialPassword If set to 1 , using initial passwords will lead to an exception (default is 0 ). 55.3.2.3. Connection options The following table lists the connection options for configuring a destination in the SAP destination data store: Name Default Value Description saprouter SAP Router string for connection to systems behind a SAP Router. SAP Router string contains the chain of SAP Routers and their port numbers and has the form: (/H/<host>[/S/<port>])+ . sysnr System number of the SAP ABAP application server, mandatory for a direct connection. ashost SAP ABAP application server, mandatory for a direct connection. mshost SAP message server, mandatory property for a load balancing connection. msserv SAP message server port, optional property for a load balancing connection. In order to resolve the service names sapmsXXX, a lookup in etc/services is performed by the network layer of the operating system.
If using port numbers instead of symbolic service names, no lookups are performed and no additional entries are needed. gwhost Allows specifying a concrete gateway, which should be used for establishing the connection to an application server. If not specified, the gateway on the application server is used. gwserv Should be set when using gwhost. Allows specifying the port used on that gateway. If not specified, the port of the gateway on the application server is used. In order to resolve the service names sapgwXXX a lookup in etc/services is performed by the network layer of the operating system. If using port numbers instead of symbolic service names, no lookups are performed and no additional entries are needed. r3name System ID of the SAP system, mandatory property for a load balancing connection. group Group of SAP application servers, mandatory property for a load balancing connection. network LAN Set this value depending on the network quality between JCo and your target system to optimize performance. The valid values are LAN or WAN (which is relevant for fast serialization only). If you set the network configuration option to WAN , a slower but more efficient compression algorithm is used and the data is analyzed for further compression options. If you set the network configuration option to LAN , a very fast compression algorithm is used and data analysis is performed only at a very basic level. When you set the LAN option, the compression ratio is not as efficient but the network transfer time is considered to be less significant. The default setting is LAN . serializationFormat rowBased The valid values are rowBased or columnBased . For fast serialization columnBased must be set. The default serialization setting is rowBased . 55.3.2.4. Connection pool options The following table lists the connection pool options for configuring a destination in the SAP destination data store: Name Default Value Description peakLimit 0 Maximum number of active outbound connections that can be created for a destination simultaneously. A value of 0 allows an unlimited number of active connections. Otherwise, if the value is less than the value of poolCapacity , it is automatically increased to that value. The default setting is the value of poolCapacity , or, if poolCapacity is not specified either, the default is 0 (unlimited). poolCapacity 1 Maximum number of idle outbound connections kept open by the destination. A value of 0 has the effect that there is no connection pooling (default is 1 ). expirationTime Time in milliseconds after which a free connection held internally by the destination can be closed. expirationPeriod Period in milliseconds after which the destination checks the released connections for expiration. maxGetTime Maximum time in milliseconds to wait for a connection, if the maximum allowed number of connections has already been allocated by the application. 55.3.2.5. Secure network connection options The following table lists the secure network options for configuring a destination in the SAP destination data store: Name Default Value Description sncMode Secure network connection (SNC) mode, 0 (off) or 1 (on). sncPartnername SNC partner, for example: p:CN=R3, O=XYZ-INC, C=EN . sncQop SNC level of security: 1 to 9 . sncMyname Own SNC name. Overrides the environment settings. sncLibrary Path to the library that provides the SNC service. 55.3.2.6.
Repository options The following table lists the repository options for configuring a destination in the SAP destination data store: Name Default Value Description repositoryDest Specifies the destination which is used as a repository. repositoryUser If a repository destination is not set, and this property is set, it is used as user for repository calls. This enables you to use a different user for repository lookups. repositoryPasswd The password for a repository user. Mandatory, if a repository user is used. repositorySnc (Optional) If SNC is used for this destination, it is possible to turn it off for repository connections, if this property is set to 0 . Default setting is the value of jco.client.snc _mode. For special cases only. repositoryRoundtripOptimization Enable the RFC_METADATA_GET API, which provides the repository data in one single round trip. 1 Activates use of RFC_METADATA_GET in ABAP System. 0 Deactivates RFC_METADATA_GET in ABAP System. If the property is not set, the destination initially does a remote call to check whether RFC_METADATA_GET is available. If it is available, the destination will use it. Note: If the repository is already initialized (for example, because it is used by some other destination), this property does not have any effect. Generally, this property is related to the ABAP System, and should have the same value on all destinations pointing to the same ABAP System. See note 1456826 for backend prerequisites. 55.3.2.7. Trace configuration options The following table lists the trace configuration options for configuring a destination in the SAP destination data store: Name Default Value Description trace Enable/disable RFC trace ( 0 or 1 ). cpicTrace Enable/disable CPIC trace [0..3] . 55.3.3. Server Configuration The configurations for servers are maintained in the serverDataStore property of the SAP component. Each entry in this map configures a distinct inbound connection from an SAP instance. The key for each entry is the name of the outbound connection and is used in the serverName component of a server endpoint URI as described in the URI format section. The value for each entry is a server data configuration object , org.fusesource.camel.component.sap.model.rfc.impl.ServerDataImpl , which defines the configuration of an inbound SAP connection. Sample server configuration The following Blueprint XML code shows how to create a sample server configuration with the name, quickstartServer . Notice how this example also configures a destination connection, quickstartDest , which the server uses to retrieve metadata from a remote SAP instance. This destination is configured in the server data through the repositoryDestination option. If you do not configure this option, you must create a local metadata repository instead. For example, after configuring the destination as shown in the preceding Blueprint XML file, you could handle the BAPI_FLCUST_GETLIST remote function call from an invoking client, using the following URI: 55.3.3.1. Required options The required options for the server data configuration object are, as follows: Name Default Value Description gwhost Gateway host on which the server connection should be registered. gwserv Gateway service, which is the port on which a registration can be done. In order to resolve the service names sapgwXXX , a lookup in etc/services is performed by the network layer of the operating system. 
If using port numbers instead of symbolic service names, no lookups are performed and no additional entries are needed. progid The program ID with which the registration is done. Serves as an identifier on the gateway and in the destination in the ABAP system. repositoryDestination Specifies a destination name that the server can use in order to retrieve metadata from a metadata repository hosted in a remote SAP server. connectionCount The number of connections that should be registered at the gateway. 55.3.3.2. Secure network connection options The secure network connection options for the server data configuration object are as follows: Name Default Value Description sncMode Secure network connection (SNC) mode, 0 (off) or 1 (on). sncQop SNC level of security, 1 to 9 . sncMyname SNC name of your server. Overrides the default SNC name. Typically something like p:CN=JCoServer, O=ACompany, C=EN . sncLib Path to library which provides SNC service. If this property is not provided, the value of the jco.middleware.snc_lib property is used instead. 55.3.3.3. Other options The other options for the server data configuration object are, as follows: Name Default Value Description saprouter SAP router string to use for a system protected by a firewall, which can therefore only be reached through a SAProuter, when registering the server at the gateway of that ABAP System. A typical router string is /H/firewall.hostname/H/ . maxStartupDelay The maximum time (in seconds) between two start-up attempts in case of failures. The waiting time is doubled from initially 1 second after each start-up failure until either the maximum value is reached or the server could be started successfully. trace Enable/disable RFC trace ( 0 or 1 ) workerThreadCount The maximum number of threads used by the server connection. If not set, the value for the connectionCount is used as the workerThreadCount . The maximum number of threads can not exceed 99. workerThreadMinCount The minimum number of threads used by server connection. If not set, the value for connectionCount is used as the workerThreadMinCount . 55.3.4. Repository Configuration The configurations for repositories are maintained in the repositoryDataStore property of the SAP Component. Each entry in this map configures a distinct repository. The key for each entry is the name of the repository and this key also corresponds to the name of the server to which this repository is attached. The value of each entry is a repository data configuration object, org.fusesource.camel.component.sap.model.rfc.impl.RepositoryDataImpl , that defines the contents of a metadata repository. A repository data object is a map of function template configuration objects, org.fuesource.camel.component.sap.model.rfc.impl.FunctionTemplateImpl . Each entry in this map specifies the interface of a function module and the key for each entry is the name of the function module specified. Repository data example The following code shows a simple example of configuring a metadata repository: 55.3.4.1. Function template properties The interface of a function module consists of four parameter lists by which data is transferred back and forth to the function module in an RFC call. Each parameter list consists of one or more fields, each of which is a named parameter transferred in an RFC call. 
The following parameter lists and exception list are supported: The import parameter list contains parameter values sent to a function module in an RFC call; The export parameter list contains parameter values that are returned by a function module in an RFC call; The changing parameter list contains parameter values sent to and returned by a function module in an RFC call; The table parameter list contains internal table values sent to and returned by a function module in an RFC call. The interface of a function module also consists of an exception list of ABAP exceptions that may be raised when the module is invoked in an RFC call. A function template describes the name and type of parameters in each parameter list of a function interface and the ABAP exceptions thrown by the function. A function template object maintains five property lists of metadata objects, as described in the following table. Property Description importParameterList A list of list field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl . Specifies the parameters sent in an RFC call to a function module. changingParameterList A list of list field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl . Specifies the parameters sent and returned in an RFC call to and from a function module. exportParameterList A list of list field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl . Specifies the parameters returned in an RFC call from a function module. tableParameterList A list of list field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl . Specifies the table parameters that are sent and returned in an RFC call to and from a function module. exceptionList A list of ABAP exception metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.AbapExceptionImpl . Specifies the ABAP exceptions potentially raised in an RFC call of the function module. Function template example The following example shows an outline of how to configure a function template: 55.3.4.2. List field metadata properties A list field metadata object, org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl , specifies the name and type of a field in a parameter list. For an elementary parameter field ( CHAR , DATE , BCD , TIME , BYTE , NUM , FLOAT , INT , INT1 , INT2 , DECF16 , DECF34 , STRING , XSTRING ), the following table lists the configuration properties that may be set on a list field metadata object: Name Default Value Description name - The name of the parameter field. type - The parameter type of the field. byteLength - The field length in bytes for a non-Unicode layout. This value depends on the parameter type. unicodeByteLength - The field length in bytes for a Unicode layout. This value depends on the parameter type. decimals 0 The number of decimals in the field value. Required for parameter types BCD and FLOAT. optional false If true , the field is optional and need not be set in an RFC call. Note that all elementary parameter fields require that the name , type , byteLength , and unicodeByteLength properties be specified in the field metadata object. In addition, the BCD , FLOAT , DECF16 , and DECF34 fields require the decimals property to be specified in the field metadata object.
For a complex parameter field of type TABLE or STRUCTURE , the following table lists the configuration properties that may be set on a list field metadata object: Name Default Value Description name - The name of the parameter field. type - The parameter type of the field. recordMetaData - The metadata for the structure or table. A record metadata object, org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl , is passed to specify the fields in the structure or table rows. optional false If true , the field is optional and need not be set in a RFC call. Note All complex parameter fields require that the name , type , and recordMetaData properties be specified in the field metadata object. The value of the recordMetaData property is a record field metadata object, org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl , which specifies the structure of a nested structure or the structure of a table row. Elementary list field metadata example The following metadata configuration specifies an optional, 24-digit packed BCD number parameter with two decimal places named TICKET_PRICE : Complex list field metadata example The following metadata configuration specifies a required TABLE parameter named CONNINFO with a row structure specified by the connectionInfo record metadata object: 55.3.4.3. Record metadata properties A record metadata object, org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl , specifies the name and contents of a nested STRUCTURE or the row of a TABLE parameter. A record metadata object maintains a list of record field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl , which specifies the parameters that reside in the nested structure or table row. The following table lists configuration properties that may be set on a record metadata object: Name Default Value Description name - The name of the record. recordFieldMetaData - The list of record field metadata objects, org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl . Specifies the fields contained within the structure. Note All properties of the record metadata object are required. Record metadata example The following example shows how to configure a record metadata object: 55.3.4.4. Record field metadata properties A record field metadata object, org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl , specifies the name and type of a parameter field within a structure. A record field metadata object is similar to a parameter field metadata object, except that the offsets of the individual field locations within the nested structure or table row must be additionally specified. The non-Unicode and Unicode offsets of an individual field must be calculated and specified from the sum of non-Unicode and Unicode byte lengths of the preceding fields in the structure or row. Note The failure to properly specify the offsets of fields in nested structures and table rows will cause the field storage of parameters in the underlying JCo and ABAP runtimes to overlap and prevent the proper transfer of values in RFC calls. For an elementary parameter field ( CHAR , DATE , BCD , TIME , BYTE , NUM , FLOAT , INT , INT1 , INT2 , DECF16 , DECF34 , STRING , XSTRING ), the following table lists the configuration properties that may be set on a record field metadata object: Name Default Value Description name - The name of the parameter field. type - The parameter type of the field. 
byteLength - The field length in bytes for a non-Unicode layout. This value depends on the parameter type. unicodeByteLength - The field length in bytes for a Unicode layout. This value depends on the parameter type. byteOffset - The field offset in bytes for non-Unicode layout. This offset is the byte location of the field within the enclosing structure. unicodeByteOffset - The field offset in bytes for Unicode layout. This offset is the byte location of the field within the enclosing structure. decimals 0 The number of decimals in field value; only required for parameter types BCD and FLOAT . For a complex parameter field of type TABLE or STRUCTURE , the following table lists the configuration properties that may be set on a record field metadata object: Name Default Value Description name - The name of the parameter field. type - The parameter type of the field. byteOffset - The field offset in bytes for non-Unicode layout. This offset is the byte location of the field within the enclosing structure. unicodeByteOffset - The field offset in bytes for Unicode layout. This offset is the byte location of the field within the enclosing structure. recordMetaData - The metadata for the structure or table. A record metadata object, org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl , is passed to specify the fields in the structure or table rows. Elementary record field metadata example The following metadata configuration specifies a DATE field parameter named ARRDATE located 85 bytes into the enclosing structure in the case of a non-Unicode layout and located 170 bytes into the enclosing structure in the case of a Unicode layout. Complex record field metadata example The following metadata configuration specifies a STRUCTURE field parameter named FLTINFO with a structure specified by the flightInfo record metadata object. The parameter is located at the beginning of the enclosing structure in both the case of a non-Unicode and Unicode layout. 55.4. Message Headers The SAP component supports the following message headers: Header Description CamelSap.scheme The URI scheme of the last endpoint to process the message. Use one of the following values: sap-srfc-destination sap-trfc-destination sap-qrfc-destination sap-srfc-server sap-trfc-server sap-idoc-destination sap-idoclist-destination sap-qidoc-destination sap-qidoclist-destination sap-idoclist-server CamelSap.destinationName The destination name of the last destination endpoint to process the message. CamelSap.serverName The server name of the last server endpoint to process the message. CamelSap.queueName The queue name of the last queuing endpoint to process the message. CamelSap.rfcName The RFC name of the last RFC endpoint to process the message. CamelSap.idocType The IDoc type of the last IDoc endpoint to process the message. CamelSap.idocTypeExtension The IDoc type extension, if any, of the last IDoc endpoint to process the message. CamelSap.systemRelease The system release, if any, of the last IDoc endpoint to process the message. CamelSap.applicationRelease The application release, if any, of the last IDoc endpoint to process the message. 55.5. Exchange Properties The SAP component adds the following exchange properties: Property Description CamelSap.destinationPropertiesMap A map containing the properties of each SAP destination encountered by the exchange. The map is keyed by destination name and each entry is a java.util.Properties object containing the configuration properties of that destination. 
CamelSap.serverPropertiesMap A map containing the properties of each SAP server encountered by the exchange. The map is keyed by server name and each entry is a java.util.Properties object containing the configuration properties of that server. 55.6. Message Body for RFC 55.6.1. Request and response objects An SAP endpoint expects to receive a message with a message body containing an SAP request object and will return a message with a message body containing an SAP response object. SAP requests and responses are fixed map data structures containing named fields with each field having a predefined data type. Note that the named fields in an SAP request and response are specific to an SAP endpoint, with each endpoint defining the parameters in the SAP request and response it will accept. An SAP endpoint provides factory methods to create the request and response objects that are specific to it. 55.6.2. Structure objects Both SAP request and response objects are represented in Java as a structure object which supports the org.fusesource.camel.component.sap.model.rfc.Structure interface. This interface extends both the java.util.Map and org.eclipse.emf.ecore.EObject interfaces. The field values in a structure object are accessed through the field's getter methods in the map interface. In addition, the structure interface provides a type-restricted method to retrieve field values. Structure objects are implemented in the component runtime using the Eclipse Modeling Framework (EMF) and support that framework's EObject interface. Instances of a structure object have attached metadata which define and restrict the structure and contents of the map of fields it provides. This metadata can be accessed and introspected using the standard methods provided by EMF. Please refer to the EMF documentation for further details. Note Attempts to get a parameter not defined on a structure object will return null. Attempts to set a parameter not defined on a structure will throw an exception as well as attempts to set the value of a parameter with an incorrect type. As discussed in the following sections, structure objects can contain fields that contain values of the complex field types, STRUCTURE , and TABLE . Note It is unnecessary to create instances of these types and add them to the structure. Instances of these field values are created on demand if necessary when accessed in the enclosing structure. 55.6.3. Field types The fields that reside within the structure object of an SAP request or response may be either elementary or complex . An elementary field contains a single scalar value, whereas a complex field will contain one or more fields of either an elementary or complex type. 55.6.3.1. Elementary field types An elementary field may be a character, numeric, hexadecimal or string field type. The following table summarizes the types of elementary fields that may reside in a structure object: Field Type Corresponding Java Type Byte Length Unicode Byte Length Number Decimals Digits Description CHAR java.lang.String 1 to 65535 1 to 65535 - ABAP Type 'C': Fixed sized character string DATE java.util.Date 8 16 - ABAP Type 'D': Date (format: YYYYMMDD) BCD java.math.BigDecimal 1 to 16 1 to 16 0 to 14 ABAP Type 'P': Packed BCD number. A BCD number contains two digits per byte. 
TIME java.util.Date 6 12 - ABAP Type 'T': Time (format: HHMMSS) BYTE byte[] 1 to 65535 1 to 65535 - ABAP Type 'X':Fixed sized byte array NUM java.lang.String 1 to 65535 1 to 65535 - ABAP Type 'N': Fixed sized numeric character string FLOAT java.lang.Double 8 8 0 to 15 ABAP Type 'F': Floating point number INT java.lang.Integer 4 4 - ABAP Type 'I': 4-byte Integer INT2 java.lang.Integer 2 2 - ABAP Type 'S': 2-byte Integer INT1 java.lang.Integer 1 1 - ABAP Type 'B': 1-byte Integer DECF16 java.match.BigDecimal 8 8 16 ABAP Type 'decfloat16': 8 -byte Decimal Floating Point Number DECF34 java.math.BigDecimal 16 16 34 ABAP Type 'decfloat34': 16-byte Decimal Floating Point Number STRING java.lang.String 8 8 - ABAP Type 'G': Variable length character string XSTRING byte[] 8 8 - ABAP Type 'Y': Variable length byte array 55.6.3.2. Character field types A character field contains a fixed sized character string that may use either a non-Unicode or Unicode character encoding in the underlying JCo and ABAP runtimes. Non-Unicode character strings encode one character per byte. Unicode character strings are encoded in two bytes using UTF-16 encoding. Character field values are represented in Java as java.lang.String objects and the underlying JCo runtime is responsible for the conversion to their ABAP representation. A character field declares its field length in its associated byteLength and unicodeByteLength properties, which determine the length of the field's character string in each encoding system. CHAR A CHAR character field is a text field containing alphanumeric characters and corresponds to the ABAP type C. NUM A NUM character field is a numeric text field containing numeric characters only and corresponds to the ABAP type N. DATE A DATE character field is an 8 character date field with the year, month and day formatted as YYYYMMDD and corresponds to the ABAP type D. TIME A TIME character field is a 6 character time field with the hours, minutes and seconds formatted as HHMMSS and corresponds to the ABAP type T. 55.6.3.3. Numeric field types A numeric field contains a number. The following numeric field types are supported: INT An INT numeric field is an integer field stored as a 4-byte integer value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type I. An INT field value is represented in Java as a java.lang.Integer object. INT2 An INT2 numeric field is an integer field stored as a 2-byte integer value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type S. An INT2 field value is represented in Java as a java.lang.Integer object. INT1 An INT1 field is an integer field stored as a 1-byte integer value in the underlying JCo and ABAP runtimes value and corresponds to the ABAP type B. An INT1 field value is represented in Java as a java.lang.Integer object. FLOAT A FLOAT field is a binary floating point number field stored as an 8-byte double value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type F. A FLOAT field declares the number of decimal digits that the field's value contains in its associated decimal property. In the case of a FLOAT field, this decimal property can have a value between 1 and 15 digits. A FLOAT field value is represented in Java as a java.lang.Double object. BCD A BCD field is a binary coded decimal field stored as a 1 to 16 byte packed number in the underlying JCo and ABAP runtimes and corresponds to the ABAP type P. A packed number stores two decimal digits per byte. 
A BCD field declares its field length in its associated byteLength and unicodeByteLength properties. In the case of a BCD field, these properties can have a value between 1 and 16 bytes, and both properties will have the same value. A BCD field declares the number of decimal digits that the field's value contains in its associated decimal property. In the case of a BCD field, this decimal property can have a value between 1 and 14 digits. A BCD field value is represented in Java as a java.math.BigDecimal . DECF16 A DECF16 field is a decimal floating point stored as an 8-byte IEEE 754 decimal64 floating point value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type decfloat16 . The value of a DECF16 field has 16 decimal digits. The value of a DECF16 field is represented in Java as java.math.BigDecimal . DECF34 A DECF34 field is a decimal floating point stored as a 16-byte IEEE 754 decimal128 floating point value in the underlying JCo and ABAP runtimes and corresponds to the ABAP type decfloat34 . The value of a DECF34 field has 34 decimal digits. The value of a DECF34 field is represented in Java as java.math.BigDecimal . 55.6.3.4. Hexadecimal field types A hexadecimal field contains raw binary data. The following hexadecimal field types are supported: BYTE A BYTE field is a fixed sized byte string stored as a byte array in the underlying JCo and ABAP runtimes and corresponds to the ABAP type X. A BYTE field declares its field length in its associated byteLength and unicodeByteLength properties. In the case of a BYTE field, these properties can have a value between 1 and 65535 bytes and both properties will have the same value. The value of a BYTE field is represented in Java as a byte[] object. 55.6.3.5. String field types A string field references a variable length string value. The length of that string value is not fixed until runtime. The storage for the string value is dynamically created in the underlying JCo and ABAP runtimes. The storage for the string field itself is fixed and contains only a string header. STRING A STRING field refers to a character string stored in the underlying JCo and ABAP runtimes as an 8-byte value. It corresponds to the ABAP type G. The value of the STRING field is represented in Java as a java.lang.String object. XSTRING An XSTRING field refers to a byte string stored in the underlying JCo and ABAP runtimes as an 8-byte value. It corresponds to the ABAP type Y. The value of the STRING field is represented in Java as a byte[] object. 55.6.3.6. Complex field types A complex field may be either a structure or table field type. The following table summarizes these complex field types. Field Type Corresponding Java Type Byte Length Unicode Byte Length Number Decimals Digits Description STRUCTURE org.fusesource.camel.component.sap.model.rfc.Structure Total of individual field byte lengths Total of individual field Unicode byte lengths - ABAP Type 'u' & 'v': Heterogeneous Structure TABLE org.fusesource.camel.component.sap.model.rfc.Table Byte length of row structure Unicode byte length of row structure - ABAP Type 'h': Table 55.6.3.7. Structure field types A STRUCTURE field contains a structure object and is stored in the underlying JCo and ABAP runtimes as an ABAP structure record. It corresponds to either an ABAP type u or v . The value of a STRUCTURE field is represented in Java as a structure object with the interface org.fusesource.camel.component.sap.model.rfc.Structure . 55.6.3.8. 
Table field types A TABLE field contains a table object and is stored in the underlying JCo and ABAP runtimes as an ABAP internal table. It corresponds to the ABAP type h . The value of the field is represented in Java by a table object with the interface org.fusesource.camel.component.sap.model.rfc.Table . 55.6.3.9. Table objects A table object is a homogeneous list data structure containing rows of structure objects with the same structure. This interface extends both the java.util.List and org.eclipse.emf.ecore.EObject interfaces. The list of rows in a table object is accessed and managed using the standard methods defined in the list interface. In addition, the table interface provides two factory methods for creating and adding structure objects to the row list. Table objects are implemented in the component runtime using the Eclipse Modeling Framework (EMF) and support that framework's EObject interface. Instances of a table object have attached metadata which define and restrict the structure and contents of the rows it provides. This metadata can be accessed and introspected using the standard methods provided by EMF. Please refer to the EMF documentation for further details. Note Attempts to add or set a row structure value of the wrong type will throw an exception. 55.7. Message Body for IDoc 55.7.1. IDoc message type When using one of the IDoc Camel SAP endpoints, the type of the message body depends on which particular endpoint you are using. For a sap-idoc-destination endpoint or a sap-qidoc-destination endpoint, the message body is of Document type: For a sap-idoclist-destination endpoint, a sap-qidoclist-destination endpoint, or a sap-idoclist-server endpoint, the message body is of DocumentList type: 55.7.2. The IDoc document model For the Camel SAP component, an IDoc document is modeled using the Eclipse Modeling Framework (EMF), which provides a wrapper API around the underlying SAP IDoc API. The most important types in this model are: The Document type represents an IDoc document instance. In outline, the Document interface exposes the following methods: The following kinds of method are exposed by the Document interface: Methods for accessing the control record Most of the methods are for accessing or modifying field values of the IDoc control record. These methods are of the form AttributeName , where AttributeName is the name of a field value. Method for accessing the document contents The getRootSegment method provides access to the document contents (IDoc data records), returning the contents as a Segment object. Each Segment object can contain an arbitrary number of child segments, and the segments can be nested to an arbitrary degree. Note, however, that the precise layout of the segment hierarchy is defined by the particular IDoc type of the document. When creating (or reading) a segment hierarchy, therefore, you must be sure to follow the exact structure as defined by the IDoc type. The Segment type is used to access the data records of the IDoc document, where the segments are laid out in accordance with the structure defined by the document's IDoc type. In outline, the Segment interface exposes the following methods: The getChildren(String segmentType) method is particularly useful for adding new (nested) children to a segment. It returns an object of type, SegmentList , which is defined as follows: Hence, to create a data record of E1SCU_CRE type, you could use Java code like the following: 55.7.3. 
How an IDoc is related to a Document object According to the SAP documentation, an IDoc document consists of the following main parts: Control record The control record (which contains the metadata for the IDoc document) is represented by the attributes on the Document object. Data records The data records are represented by the Segment objects, which are constructed as a nested hierarchy of segments. You can access the root segment through the Document.getRootSegment method. Status records In the Camel SAP component, the status records are not represented by the document model. But you do have access to the latest status value through the status attribute on the control record. Example of creating a Document instance The following example shows how to create an IDoc document with the IDoc type, FLCUSTOMER_CREATEFROMDATA01 , using the IDoc model API in Java. Example 55.1. Creating an IDoc Document in Java 55.8. Document attributes IDoc Document Attributes table shows the control record attributes that you can set on the Document object. Table 55.2. IDoc Document Attributes Attribute Length SAP Field Description archiveKey 70 ARCKEY EDI archive key client 3 MANDT Client creationDate 8 CREDAT Date IDoc was created creationTime 6 CRETIM Time IDoc was created direction 1 DIRECT Direction eDIMessage 14 REFMES Reference to message eDIMessageGroup 14 REFGRP Reference to message group eDIMessageType 6 STDMES EDI message type eDIStandardFlag 1 STD EDI standard eDIStandardVersion 6 STDVRS Version of EDI standard eDITransmissionFile 14 REFINT Reference to interchange file iDocCompoundType 8 DOCTYP IDoc type iDocNumber 16 DOCNUM IDoc number iDocSAPRelease 4 DOCREL SAP Release of IDoc iDocType 30 IDOCTP Name of basic IDoc type iDocTypeExtension 30 CIMTYP Name of extension type messageCode 3 MESCOD Logical message code messageFunction 3 MESFCT Logical message function messageType 30 MESTYP Logical message type outputMode 1 OUTMOD Output mode recipientAddress 10 RCVSAD Receiver address (SADR) recipientLogicalAddress 70 RCVLAD Logical address of receiver recipientPartnerFunction 2 RCVPFC Partner function of receiver recipientPartnerNumber 10 RCVPRN Partner number of receiver recipientPartnerType 2 RCVPRT Partner type of receiver recipientPort 10 RCVPOR Receiver port (SAP System, EDI subsystem) senderAddress SNDSAD Sender address (SADR) senderLogicalAddress 70 SNDLAD Logical address of sender senderPartnerFunction 2 SNDPFC Partner function of sender senderPartnerNumber 10 SNDPRN Partner number of sender senderPartnerType 2 SNDPRT Partner type of sender senderPort 10 SNDPOR Sender port (SAP System, EDI subsystem) serialization 20 SERIAL EDI/ALE: Serialization field status 2 STATUS Status of IDoc testFlag 1 TEST Test flag 55.8.1. Setting document attributes in Java When setting the control record attributes in Java, the usual convention for Java bean properties is followed. That is, a name attribute can be accessed through the getName and setName methods, for getting and setting the attribute value. For example, the iDocType , iDocTypeExtension , and messageType attributes can be set as follows on a Document object: 55.8.2. Setting document attributes in XML When setting the control record attributes in XML, the attributes must be set on the idoc:Document element. For example, the iDocType , iDocTypeExtension , and messageType attributes can be set as follows: 55.9. Transaction Support 55.9.1. BAPI transaction model The SAP Component supports the BAPI transaction model for outbound communication with SAP. 
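As a point of reference for the discussion that follows, the sketch below shows where the transacted option appears in a route. It is a minimal Java DSL sketch; the direct: endpoint and the bean names are assumptions modeled on Example 2 later in this chapter.

import org.apache.camel.builder.RouteBuilder;

public class CreateFlightTripRouteSketch extends RouteBuilder {
    @Override
    public void configure() {
        // transacted=true asks the destination endpoint to run the RFC call
        // inside a stateful session that is committed or rolled back when
        // processing of the exchange completes (explained below).
        from("direct:createFlightTrip")
            .to("bean:createFlightTripRequest")
            .to("sap-srfc-destination:quickstartDest:BAPI_FLTRIP_CREATE?transacted=true")
            .to("bean:returnFlightTripResponse");
    }
}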
A destination endpoint with a URL containing the transacted option set to true will, if necessary, initiate a stateful session on the outbound connection of the endpoint and register a Camel Synchronization object with the exchange. This synchronization object will call the BAPI service method BAPI_TRANSACTION_COMMIT and end the stateful session when the processing of the message exchange is complete. If the processing of the message exchange fails, the synchronization object will call the BAPI server method BAPI_TRANSACTION_ROLLBACK and end the stateful session. 55.9.2. RFC transaction model The tRFC protocol accomplishes an AT-MOST-ONCE delivery and processing guarantee by identifying each transactional request with a unique transaction identifier (TID). A TID accompanies each request sent in the protocol. A sending application using the tRFC protocol must identify each instance of a request with a unique TID when sending the request. An application may send a request with a given TID multiple times, but the protocol ensures that the request is delivered and processed in the receiving system at most once. An application may choose to resend a request with a given TID when encountering a communication or system error when sending the request, and is thus in doubt as to whether that request was delivered and processed in the receiving system. By resending a request when encountering a communication error, a client application using the tRFC protocol can thus ensure EXACTLY-ONCE delivery and processing guarantees for its request. 55.9.3. Which transaction model to use? A BAPI transaction is an application level transaction, in the sense that it imposes ACID guarantees on the persistent data changes performed by a BAPI method or RFC function in the SAP database. An RFC transaction is a communication transaction, in the sense that it imposes delivery guarantees (AT-MOST-ONCE, EXACTLY-ONCE, EXACTLY-ONCE-IN-ORDER) on requests to a BAPI method and/or RFC function. 55.9.4. Transactional RFC destination endpoints The following destination endpoints support RFC transactions: sap-trfc-destination sap-qrfc-destination A single Camel route can include multiple transactional RFC destination endpoints, sending messages to multiple RFC destinations and even sending messages to the same RFC destination multiple times. This implies that the Camel SAP component potentially needs to keep track of many transaction IDs (TIDs) for each Exchange object passing along a route. Now if the route processing fails and must be retried, the situation gets quite complicated. The RFC transaction semantics demand that each RFC destination along the route must be invoked using the same TID that was used the first time around (and where the TIDs for each destinations are distinct from each other). In other words, the Camel SAP component must keep track of which TID was used at which point along the route, and remember this information, so that the TIDs can be replayed in the correct order. By default, Camel does not provide a mechanism that enables an Exchange to know where it is in a route. To provide such a mechanism, it is necessary to install the CurrentProcessorDefinitionInterceptStrategy interceptor into the Camel runtime. This interceptor must be installed into the Camel runtime, in order for the Camel SAP component to keep track of the TIDs in a route. 55.9.5. 
Transactional RFC server endpoints The following server endpoints support RFC transactions: sap-trfc-server When a Camel exchange processing a transactional request encounters a processing error, Camel handles the processing error through its standard error handling mechanisms. If the Camel route processing the exchange is configured to propagate the error back to the caller, the SAP server endpoint that initiated the exchange takes note of the failure and the sending SAP system is notified of the error. The sending SAP system can then respond by sending another transaction request with the same TID to process the request again. 55.10. XML Serialization for RFC SAP request and response objects support an XML serialization format which enable these objects to be serialized to and from an XML document. 55.10.1. XML namespace Each RFC in a repository defines a specific XML namespace for the elements which compose the serialized forms of its Request and Response objects. The form of this namespace URL is as follows: RFC namespace URLs have a common http://sap.fusesource.org/rfc prefix followed by the name of the repository in which the RFC's metadata is defined. The final component in the URL is the name of the RFC itself. 55.10.2. Request and response XML documents An SAP request object will be serialized into an XML document with the root element of that document named Request and scoped by the namespace of the request's RFC. An SAP response object will be serialized into an XML document with the root element of that document named Response and scoped by the namespace of the response's RFC. 55.10.3. Structure fields Structure fields in parameter lists or nested structures are serialized as elements. The element name of the serialized structure corresponds to the field name of the structure within the enclosing parameter list, structure or table row entry it resides. Note that the type name of the structure element in the RFC namespace will correspond to the name of the record metadata object which defines the structure, as in the following example: This distinction will be important when specifying a JAXB bean to marshal and unmarshal the structure. 55.10.4. Table fields Table fields in parameter lists or nested structures are serialized as elements. The element name of the serialized structure will correspond to the field name of the table within the enclosing parameter list, structure, or table row entry it resides. The table element will contain a series of row elements to hold the serialized values of the table's row entries. Note that the type name of the table element in the RFC namespace corresponds to the name of the record metadata object which defines the row structure of the table suffixed by _TABLE . The type name of the table row element in the RFC name corresponds to the name of the record metadata object which defines the row structure of the table, as in the following example: This distinction will be important when specifying a JAXB bean to marshal and unmarshal the structure. 55.10.5. Elementary fields Elementary fields in parameter lists or nested structures are serialized as attributes on the element of the enclosing parameter list or structure. The attribute name of the serialized field corresponds to the field name of the field within the enclosing parameter list, structure, or table row entry it resides, as in the following example: 55.10.6. 
Date and time formats Date and Time fields are serialized into attribute values using the following format: Date fields will be serialized with only the year, month, day and timezone components set: Time fields will be serialized with only the hour, minute, second, millisecond and timezone components set: 55.11. XML Serialization for IDoc An IDoc message body can be serialized into an XML string format, with the help of a built-in type converter. 55.11.1. XML namespace Each serialized IDoc is associated with an XML namespace, which has the following general format: Both the repositoryName (name of the remote SAP metadata repository) and the idocType (IDoc document type) are mandatory, but the other components of the namespace can be left blank. For example, you could have an XML namespace like the following: 55.11.2. Built-in type converter The Camel SAP component has a built-in type converter, which is capable of converting a Document object or a DocumentList object to and from a String type. For example, to serialize a Document object to an XML string, you can simply add the following line to a route in XML DSL: You can also use this approach in reverse, to convert a serialized XML message back into a Document object. For example, given that the current message body is a serialized XML string, you can convert it back into a Document object by adding the following line to a route in XML DSL: 55.11.3. Sample IDoc message body in XML format When you convert an IDoc message to a String , it is serialized into an XML document, where the root element is either idoc:Document (for a single document) or idoc:DocumentList (for a list of documents). The following example shows a single IDoc document that has been serialized to an idoc:Document element. Example 55.2. IDoc Message Body in XML 55.12. Example 1: Reading Data from SAP This example demonstrates a route that reads FlightCustomer business object data from SAP. The route invokes the FlightCustomer BAPI method, BAPI_FLCUST_GETLIST , using an SAP synchronous RFC destination endpoint to retrieve the data. 55.12.1. Java DSL for route The Java DSL for the example route is as follows: 55.12.2. XML DSL for route And the Spring DSL for the same route is as follows: 55.12.3. createFlightCustomerGetListRequest bean The createFlightCustomerGetListRequest bean is responsible for building, in its exchange method, the SAP request object that is used in the RFC call of the subsequent SAP endpoint. The following code snippet demonstrates the sequence of operations to build the request object: 55.12.4. returnFlightCustomerInfo bean The returnFlightCustomerInfo bean is responsible for extracting data, in its exchange method, from the SAP response object that it receives from the SAP endpoint. The following code snippet demonstrates the sequence of operations to extract the data from the response object: 55.13. Example 2: Writing Data to SAP This example demonstrates a route that creates a FlightTrip business object instance in SAP. The route invokes the FlightTrip BAPI method, BAPI_FLTRIP_CREATE , using a destination endpoint to create the object. 55.13.1. Java DSL for route The Java DSL for the example route is as follows: 55.13.2. XML DSL for route And the Spring DSL for the same route is as follows: 55.13.3. Transaction support Note that the URL for the SAP endpoint has the transacted option set to true . When this option is enabled, the endpoint ensures that an SAP transaction session has been initiated before invoking the RFC call.
Because this endpoint's RFC creates new data in SAP, this option is necessary to make the route's changes permanent in SAP. 55.13.4. Populating request parameters The createFlightTripRequest and returnFlightTripResponse beans are responsible for populating request parameters into the SAP request and extracting response parameters from the SAP response respectively, following the same sequence of operations as demonstrated in the previous example. 55.14. Example 3: Handling Requests from SAP This example demonstrates a route which handles a request from SAP to the BOOK_FLIGHT RFC, which is implemented by the route. In addition, it demonstrates the component's XML serialization support, using JAXB to unmarshal and marshal SAP request objects and response objects to custom beans. This route creates a FlightTrip business object on behalf of a travel agent, FlightCustomer . The route first unmarshals the SAP request object received by the SAP server endpoint into a custom JAXB bean. This custom bean is then multicast in the exchange to three sub-routes, which gather the travel agent, flight connection, and passenger information required to create the flight trip. The final sub-route creates the flight trip object in SAP, as in the previous example. The final sub-route also creates and returns a custom JAXB bean which is marshaled into an SAP response object and returned by the server endpoint. 55.14.1. Java DSL for route The Java DSL for the example route is as follows: 55.14.2. XML DSL for route And the XML DSL for the same route is as follows: 55.14.3. BookFlightRequest bean The following listing illustrates a JAXB bean which unmarshals from the serialized form of an SAP BOOK_FLIGHT request object: 55.14.4. BookFlightResponse bean The following listing illustrates a JAXB bean which marshals to the serialized form of an SAP BOOK_FLIGHT response object: Note The complex parameter fields of the response object are serialized as child elements of the response. 55.14.5. FlightInfo bean The following listing illustrates a JAXB bean which marshals to the serialized form of the complex structure parameter FLTINFO : 55.14.6. ConnectionInfoTable bean The following listing illustrates a JAXB bean which marshals to the serialized form of the complex table parameter, CONNINFO . Note that the name of the root element type of the JAXB bean corresponds to the name of the row structure type suffixed with _TABLE and the bean contains a list of row elements. 55.14.7. ConnectionInfo bean The following listing illustrates a JAXB bean, which marshals to the serialized form of the above table's row elements:

<dependency>
  <groupId>org.fusesource</groupId>
  <artifactId>camel-sap-starter</artifactId>
  <version>3.20.1.redhat-00056</version>
</dependency>

sap-srfc-destination:destinationName:rfcName
sap-trfc-destination:destinationName:rfcName
sap-qrfc-destination:destinationName:queueName:rfcName
sap-srfc-server:serverName:rfcName[?options]
sap-trfc-server:serverName:rfcName[?options]

sap-idoc-destination:destinationName:idocType[:idocTypeExtension[:systemRelease[:applicationRelease]]]
sap-idoclist-destination:destinationName:idocType[:idocTypeExtension[:systemRelease[:applicationRelease]]]
sap-qidoc-destination:destinationName:queueName:idocType[:idocTypeExtension[:systemRelease[:applicationRelease]]]
sap-qidoclist-destination:destinationName:queueName:idocType[:idocTypeExtension[:systemRelease[:applicationRelease]]]
sap-idoclist-server:serverName:idocType[:idocTypeExtension[:systemRelease[:applicationRelease]]][?options]

<?xml version="1.0" encoding="UTF-8"?>
<blueprint ... >
  <!-- Configures the Inbound and Outbound SAP Connections -->
  <bean id="sap-configuration"
      class="org.fusesource.camel.component.sap.SapConnectionConfiguration">
    <property name="destinationDataStore">
      <map>
        <entry key="quickstartDest" value-ref="quickstartDestinationData" />
      </map>
    </property>
    <property name="serverDataStore">
      <map>
        <entry key="quickstartServer" value-ref="quickstartServerData" />
      </map>
    </property>
  </bean>
  <!-- Configures an Outbound SAP Connection -->
  <!-- *** Please enter the connection property values for your environment *** -->
  <bean id="quickstartDestinationData"
      class="org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl">
    <property name="ashost" value="example.com" />
    <property name="sysnr" value="00" />
    <property name="client" value="000" />
    <property name="user" value="username" />
    <property name="passwd" value="password" />
    <property name="lang" value="en" />
  </bean>
  <!-- Configures an Inbound SAP Connection -->
  <!-- *** Please enter the connection property values for your environment *** -->
  <bean id="quickstartServerData"
      class="org.fusesource.camel.component.sap.model.rfc.impl.ServerDataImpl">
    <property name="gwhost" value="example.com" />
    <property name="gwserv" value="3300" />
    <!-- The following property values should not be changed -->
    <property name="progid" value="QUICKSTART" />
    <property name="repositoryDestination" value="quickstartDest" />
    <property name="connectionCount" value="2" />
  </bean>
</blueprint>

<?xml version="1.0" encoding="UTF-8"?>
<blueprint ... >
  <!-- Create interceptor to support tRFC processing -->
  <bean id="currentProcessorDefinitionInterceptor"
      class="org.fusesource.camel.component.sap.CurrentProcessorDefinitionInterceptStrategy" />
  <!-- Configures the Inbound and Outbound SAP Connections -->
  <bean id="sap-configuration"
      class="org.fusesource.camel.component.sap.SapConnectionConfiguration">
    <property name="destinationDataStore">
      <map>
        <entry key="quickstartDest" value-ref="quickstartDestinationData" />
      </map>
    </property>
  </bean>
  <!-- Configures an Outbound SAP Connection -->
  <!-- *** Please enter the connection property values for your environment *** -->
  <bean id="quickstartDestinationData"
      class="org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl">
    <property name="ashost" value="example.com" />
    <property name="sysnr" value="00" />
    <property name="client" value="000" />
    <property name="user" value="username" />
    <property name="passwd" value="password" />
    <property name="lang" value="en" />
  </bean>
</blueprint>

sap-srfc-destination:quickstartDest:BAPI_FLCUST_GETLIST

<?xml version="1.0" encoding="UTF-8"?>
<blueprint ... >
  <!-- Configures the Inbound and Outbound SAP Connections -->
  <bean id="sap-configuration"
      class="org.fusesource.camel.component.sap.SapConnectionConfiguration">
    <property name="destinationDataStore">
      <map>
        <entry key="quickstartDest" value-ref="quickstartDestinationData" />
      </map>
    </property>
    <property name="serverDataStore">
      <map>
        <entry key="quickstartServer" value-ref="quickstartServerData" />
      </map>
    </property>
  </bean>
  <!-- Configures an Outbound SAP Connection -->
  <!-- *** Please enter the connection property values for your environment *** -->
  <bean id="quickstartDestinationData"
      class="org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl">
    <property name="ashost" value="example.com" />
    <property name="sysnr" value="00" />
    <property name="client" value="000" />
    <property name="user" value="username" />
    <property name="passwd" value="password" />
    <property name="lang" value="en" />
  </bean>
  <!-- Configures an Inbound SAP Connection -->
  <!-- *** Please enter the connection property values for your environment *** -->
  <bean id="quickstartServerData"
      class="org.fusesource.camel.component.sap.model.rfc.impl.ServerDataImpl">
    <property name="gwhost" value="example.com" />
    <property name="gwserv" value="3300" />
    <!-- The following property values should not be changed -->
    <property name="progid" value="QUICKSTART" />
    <property name="repositoryDestination" value="quickstartDest" />
    <property name="connectionCount" value="2" />
  </bean>
</blueprint>

sap-srfc-server:quickstartServer:BAPI_FLCUST_GETLIST

<?xml version="1.0" encoding="UTF-8"?>
<blueprint ... >
  <!-- Configures the sap-srfc-server component -->
  <bean id="sap-configuration"
      class="org.fusesource.camel.component.sap.SapConnectionConfiguration">
    <property name="repositoryDataStore">
      <map>
        <entry key="nplServer" value-ref="nplRepositoryData" />
      </map>
    </property>
  </bean>
  <!-- Configures a Metadata Repository -->
  <bean id="nplRepositoryData"
      class="org.fusesource.camel.component.sap.model.rfc.impl.RepositoryDataImpl">
    <property name="functionTemplates">
      <map>
        <entry key="BOOK_FLIGHT" value-ref="bookFlightFunctionTemplate" />
      </map>
    </property>
  </bean>
</blueprint>

<bean id="bookFlightFunctionTemplate"
    class="org.fusesource.camel.component.sap.model.rfc.impl.FunctionTemplateImpl">
  <property name="importParameterList">
    <list>
    </list>
  </property>
  <property name="changingParameterList">
    <list>
    </list>
  </property>
  <property name="exportParameterList">
    <list>
    </list>
  </property>
  <property name="tableParameterList">
    <list>
    </list>
  </property>
  <property name="exceptionList">
    <list>
    </list>
  </property>
</bean>

<bean class="org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl">
  <property name="name" value="TICKET_PRICE" />
  <property name="type" value="BCD" />
  <property name="byteLength" value="12" />
  <property name="unicodeByteLength" value="24" />
  <property name="decimals" value="2" />
  <property name="optional" value="true" />
</bean>

<bean class="org.fusesource.camel.component.sap.model.rfc.impl.ListFieldMetaDataImpl">
  <property name="name" value="CONNINFO" />
  <property name="type" value="TABLE" />
  <property name="recordMetaData" ref="connectionInfo" />
</bean>

<bean id="connectionInfo"
    class="org.fusesource.camel.component.sap.model.rfc.impl.RecordMetaDataImpl">
  <property name="name" value="CONNECTION_INFO" />
  <property name="recordFieldMetaData">
    <list>
    </list>
  </property>
</bean>

<bean class="org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl">
  <property name="name" value="ARRDATE" />
  <property name="type" value="DATE" />
  <property name="byteLength" value="8" />
  <property name="unicodeByteLength" value="16" />
  <property name="byteOffset" value="85" />
  <property name="unicodeByteOffset" value="170" />
</bean>

<bean class="org.fusesource.camel.component.sap.model.rfc.impl.FieldMetaDataImpl">
  <property name="name" value="FLTINFO" />
  <property name="type" value="STRUCTURE" />
  <property name="byteOffset" value="0" />
  <property name="unicodeByteOffset" value="0" />
  <property name="recordMetaData" ref="flightInfo" />
</bean>

public class SAPEndpoint ... {
    public Structure getRequest() throws Exception;

    public Structure getResponse() throws Exception;
}

public interface Structure extends org.eclipse.emf.ecore.EObject, java.util.Map<String, Object> {
    <T> T get(Object key, Class<T> type);
}

public interface Table<S extends Structure> extends org.eclipse.emf.ecore.EObject, java.util.List<S> {
    /**
     * Creates and adds a table row at the end of the row list
     */
    S add();

    /**
     * Creates and adds a table row at the index in the row list
     */
    S add(int index);
}

org.fusesource.camel.component.sap.model.idoc.Document

org.fusesource.camel.component.sap.model.idoc.DocumentList

org.fusesource.camel.component.sap.model.idoc.Document
org.fusesource.camel.component.sap.model.idoc.Segment

// Java
package org.fusesource.camel.component.sap.model.idoc;

public interface Document extends EObject {
    // Access the field values from the IDoc control record
    String getArchiveKey();
    void setArchiveKey(String value);
    String getClient();
    void setClient(String value);

    // Access the IDoc document contents
    Segment getRootSegment();
}

// Java
package org.fusesource.camel.component.sap.model.idoc;

public interface Segment extends EObject, java.util.Map<String, Object> {
    // Returns the value of the 'Parent' reference.
    Segment getParent();

    // Return an immutable list of all child segments
    <S extends Segment> EList<S> getChildren();

    // Returns a list of child segments of the specified segment type.
    <S extends Segment> SegmentList<S> getChildren(String segmentType);

    EList<String> getTypes();
    Document getDocument();
    String getDescription();
    String getType();
    String getDefinition();
    int getHierarchyLevel();
    String getIdocType();
    String getIdocTypeExtension();
    String getSystemRelease();
    String getApplicationRelease();
    int getNumFields();
    long getMaxOccurrence();
    long getMinOccurrence();
    boolean isMandatory();
    boolean isQualified();
    int getRecordLength();

    <T> T get(Object key, Class<T> type);
}
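A minimal usage sketch of a processor that builds an RFC request structure, in the style of the createFlightCustomerGetListRequest bean. The endpoint class name SapSynchronousRfcDestinationEndpoint and the CUSTOMER_NAME parameter are assumptions and may need to be adjusted for your component version and function module.

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.fusesource.camel.component.sap.SapSynchronousRfcDestinationEndpoint;
import org.fusesource.camel.component.sap.model.rfc.Structure;

public class CreateFlightCustomerGetListRequest implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        // Look up the destination endpoint and ask it for an empty request
        // structure for the BAPI_FLCUST_GETLIST function module.
        // The endpoint class name here is an assumption.
        SapSynchronousRfcDestinationEndpoint endpoint = exchange.getContext().getEndpoint(
            "sap-srfc-destination:quickstartDest:BAPI_FLCUST_GETLIST",
            SapSynchronousRfcDestinationEndpoint.class);
        Structure request = endpoint.getRequest();

        // CUSTOMER_NAME is an import parameter of the BAPI; a wildcard pattern
        // requests all matching flight customers (assumed parameter name).
        request.put("CUSTOMER_NAME", "*");

        // The populated request becomes the message body for the SAP endpoint
        // that follows in the route.
        exchange.getIn().setBody(request);
    }
}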
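A complementary sketch for the response side, showing the type-restricted get method of the Structure interface and iteration over a Table field. The CUSTOMER_LIST table and its CUSTOMERID and CUSTNAME fields are assumptions based on the BAPI's interface and serve only as an illustration.

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.fusesource.camel.component.sap.model.rfc.Structure;
import org.fusesource.camel.component.sap.model.rfc.Table;

public class ReturnFlightCustomerInfo implements Processor {
    @Override
    @SuppressWarnings("unchecked")
    public void process(Exchange exchange) throws Exception {
        // The SAP endpoint placed the response structure in the message body.
        Structure response = exchange.getIn().getBody(Structure.class);

        // Table fields are returned as Table objects; each row is a Structure.
        Table<Structure> customerList = response.get("CUSTOMER_LIST", Table.class);

        StringBuilder names = new StringBuilder();
        for (Structure row : customerList) {
            // The type-restricted get() avoids casts and fails fast on a type mismatch.
            names.append(row.get("CUSTOMERID", String.class))
                 .append(": ")
                 .append(row.get("CUSTNAME", String.class))
                 .append('\n');
        }
        exchange.getIn().setBody(names.toString());
    }
}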
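For the IDoc side, a sketch of how a Document could be created and populated using the Document and Segment interfaces. The endpoint class name, the createDocument factory method, and the segment and field names follow the FLCUSTOMER_CREATEFROMDATA01 example and are assumptions to verify against your component version and IDoc type.

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.fusesource.camel.component.sap.SapTransactionalIDocDestinationEndpoint;
import org.fusesource.camel.component.sap.model.idoc.Document;
import org.fusesource.camel.component.sap.model.idoc.Segment;

public class CreateFlightCustomerIdoc implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        // An IDoc destination endpoint acts as the factory for Document instances.
        // The endpoint class and factory method names are assumptions.
        SapTransactionalIDocDestinationEndpoint endpoint = exchange.getContext().getEndpoint(
            "sap-idoc-destination:quickstartDest:FLCUSTOMER_CREATEFROMDATA01",
            SapTransactionalIDocDestinationEndpoint.class);
        Document document = endpoint.createDocument();

        // Control record attributes follow the JavaBean convention described in
        // the Document attributes section; the value is a placeholder.
        document.setMessageType("FLCUSTOMER_CREATEFROMDATA");

        // Data records: getChildren(type).add() creates and appends a child segment,
        // and a Segment is also a Map, so fields are set with put().
        Segment rootSegment = document.getRootSegment();
        Segment e1scuCre = rootSegment.getChildren("E1SCU_CRE").add();
        Segment e1bpscunew = e1scuCre.getChildren("E1BPSCUNEW").add();
        e1bpscunew.put("CUSTNAME", "Fred Flintstone");

        // Send the document to the IDoc destination endpoint later in the route.
        exchange.getIn().setBody(document);
    }
}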
"// Java package org.fusesource.camel.component.sap.model.idoc; public interface SegmentList<S extends Segment> extends EObject, EList<S> { S add(); S add(int index); }",
"Segment rootSegment = document.getRootSegment(); Segment E1SCU_CRE_Segment = rootSegment.getChildren(\"E1SCU_CRE\").add();",
"// Java import org.fusesource.camel.component.sap.model.idoc.Document; import org.fusesource.camel.component.sap.model.idoc.Segment; import org.fusesource.camel.component.sap.util.IDocUtil; import org.fusesource.camel.component.sap.model.idoc.Document; import org.fusesource.camel.component.sap.model.idoc.DocumentList; import org.fusesource.camel.component.sap.model.idoc.IdocFactory; import org.fusesource.camel.component.sap.model.idoc.IdocPackage; import org.fusesource.camel.component.sap.model.idoc.Segment; import org.fusesource.camel.component.sap.model.idoc.SegmentChildren; // // Create a new IDoc instance using the modeling classes // // Get the SAP Endpoint bean from the Camel context. // In this example, it's a 'sap-idoc-destination' endpoint. SapTransactionalIDocDestinationEndpoint endpoint = exchange.getContext().getEndpoint( \"bean: SapEndpointBeanID \", SapTransactionalIDocDestinationEndpoint.class ); // The endpoint automatically populates some required control record attributes Document document = endpoint.createDocument() // Initialize additional control record attributes document.setMessageType(\"FLCUSTOMER_CREATEFROMDATA\"); document.setRecipientPartnerNumber(\"QUICKCLNT\"); document.setRecipientPartnerType(\"LS\"); document.setSenderPartnerNumber(\"QUICKSTART\"); document.setSenderPartnerType(\"LS\"); Segment rootSegment = document.getRootSegment(); Segment E1SCU_CRE_Segment = rootSegment.getChildren(\"E1SCU_CRE\").add(); Segment E1BPSCUNEW_Segment = E1SCU_CRE_Segment.getChildren(\"E1BPSCUNEW\").add(); E1BPSCUNEW_Segment.put(\"CUSTNAME\", \"Fred Flintstone\"); E1BPSCUNEW_Segment.put(\"FORM\", \"Mr.\"); E1BPSCUNEW_Segment.put(\"STREET\", \"123 Rubble Lane\"); E1BPSCUNEW_Segment.put(\"POSTCODE\", \"01234\"); E1BPSCUNEW_Segment.put(\"CITY\", \"Bedrock\"); E1BPSCUNEW_Segment.put(\"COUNTR\", \"US\"); E1BPSCUNEW_Segment.put(\"PHONE\", \"800-555-1212\"); E1BPSCUNEW_Segment.put(\"EMAIL\", \" [email protected] \"); E1BPSCUNEW_Segment.put(\"CUSTTYPE\", \"P\"); E1BPSCUNEW_Segment.put(\"DISCOUNT\", \"005\"); E1BPSCUNEW_Segment.put(\"LANGU\", \"E\");",
"// Java document.setIDocType(\"FLCUSTOMER_CREATEFROMDATA01\"); document.setIDocTypeExtension(\"\"); document.setMessageType(\"FLCUSTOMER_CREATEFROMDATA\");",
"<?xml version=\"1.0\" encoding=\"ASCII\"?> <idoc:Document iDocType=\"FLCUSTOMER_CREATEFROMDATA01\" iDocTypeExtension=\"\" messageType=\"FLCUSTOMER_CREATEFROMDATA\" ... > </idoc:Document>",
"http://sap.fusesource.org/rfc/<Repository Name>/<RFC Name>",
"<?xml version=\"1.0\" encoding=\"ASCII\"?> <BOOK_FLIGHT:Request xmlns:BOOK_FLIGHT=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\"> </BOOK_FLIGHT:Request>",
"<?xml version=\"1.0\" encoding=\"ASCII\"?> <BOOK_FLIGHT:Response xmlns:BOOK_FLIGHT=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\"> </BOOK_FLIGHT:Response>",
"<BOOK_FLIGHT:FLTINFO xmlns:BOOK_FLIGHT=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\"> </BOOK_FLIGHT:FLTINFO>",
"<xs:schema targetNamespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\"> xmlns:xs=\"http://www.w3.org/2001/XMLSchema\"> <xs:complexType name=\"FLTINFO_STRUCTURE\"> </xs:complexType> </xs:schema>",
"<BOOK_FLIGHT:CONNINFO xmlns:BOOK_FLIGHT=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\"> <row ... > ... </row> <row ... > ... </row> </BOOK_FLIGHT:CONNINFO>",
"<xs:schema targetNamespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\" xmlns:xs=\"http://www.w3.org/2001/XMLSchema\"> <xs:complextType name=\"CONNECTION_INFO_STRUCTURE_TABLE\"> <xs:sequence> <xs:element name=\"row\" minOccures=\"0\" maxOccurs=\"unbounded\" type=\"CONNECTION_INFO_STRUCTURE\"/> <xs:sequence> </xs:sequence> </xs:complexType> <xs:complextType name=\"CONNECTION_INFO_STRUCTURE\"> </xs:complexType> </xs:schema>",
"<?xml version=\"1.0\" encoding=\"ASCII\"?> <BOOK_FLIGHT:Request xmlns:BOOK_FLIGHT=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\" CUSTNAME=\"James Legrand\" PASSFORM=\"Mr\" PASSNAME=\"Travelin Joe\" PASSBIRTH=\"1990-03-17T00:00:00.000-0500\" FLIGHTDATE=\"2014-03-19T00:00:00.000-0400\" TRAVELAGENCYNUMBER=\"00000110\" DESTINATION_FROM=\"SFO\" DESTINATION_TO=\"FRA\"/>",
"yyyy-MM-dd'T'HH:mm:ss.SSSZ",
"DEPDATE=\"2014-03-19T00:00:00.000-0400\"",
"DEPTIME=\"1970-01-01T16:00:00.000-0500\"",
"http://sap.fusesource.org/idoc/ repositoryName / idocType / idocTypeExtension / systemRelease / applicationRelease",
"http://sap.fusesource.org/idoc/MY_REPO/FLCUSTOMER_CREATEFROMDATA01///",
"<convertBodyTo type=\"java.lang.String\"/>",
"<convertBodyTo type=\"org.fusesource.camel.component.sap.model.idoc.Document\"/>",
"<?xml version=\"1.0\" encoding=\"ASCII\"?> <idoc:Document xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:FLCUSTOMER_CREATEFROMDATA01---=\"http://sap.fusesource.org/idoc/XXX/FLCUSTOMER_CREATEFROMDATA01///\" xmlns:idoc=\"http://sap.fusesource.org/idoc\" creationDate=\"2015-01-28T12:39:13.980-0500\" creationTime=\"2015-01-28T12:39:13.980-0500\" iDocType=\"FLCUSTOMER_CREATEFROMDATA01\" iDocTypeExtension=\"\" messageType=\"FLCUSTOMER_CREATEFROMDATA\" recipientPartnerNumber=\"QUICKCLNT\" recipientPartnerType=\"LS\" senderPartnerNumber=\"QUICKSTART\" senderPartnerType=\"LS\"> <rootSegment xsi:type=\"FLCUSTOMER_CREATEFROMDATA01---:ROOT\" document=\"/\"> <segmentChildren parent=\"//@rootSegment\"> <E1SCU_CRE parent=\"//@rootSegment\" document=\"/\"> <segmentChildren parent=\"//@rootSegment/@segmentChildren/@E1SCU_CRE.0\"> <E1BPSCUNEW parent=\"//@rootSegment/@segmentChildren/@E1SCU_CRE.0\" document=\"/\" CUSTNAME=\"Fred Flintstone\" FORM=\"Mr.\" STREET=\"123 Rubble Lane\" POSTCODE=\"01234\" CITY=\"Bedrock\" COUNTR=\"US\" PHONE=\"800-555-1212\" EMAIL=\"[email protected]\" CUSTTYPE=\"P\" DISCOUNT=\"005\" LANGU=\"E\"/> </segmentChildren> </E1SCU_CRE> </segmentChildren> </rootSegment> </idoc:Document>",
"from(\"direct:getFlightCustomerInfo\") .to(\"bean:createFlightCustomerGetListRequest\") .to(\"sap-srfc-destination:nplDest:BAPI_FLCUST_GETLIST\") .to(\"bean:returnFlightCustomerInfo\");",
"<route> <from uri=\"direct:getFlightCustomerInfo\"/> <to uri=\"bean:createFlightCustomerGetListRequest\"/> <to uri=\"sap-srfc-destination:nplDest:BAPI_FLCUST_GETLIST\"/> <to uri=\"bean:returnFlightCustomerInfo\"/> </route>",
"public void create(Exchange exchange) throws Exception { // Get SAP Endpoint to be called from context. SapSynchronousRfcDestinationEndpoint endpoint = exchange.getContext().getEndpoint(\"sap-srfc-destination:nplDest:BAPI_FLCUST_GETLIST\", SapSynchronousRfcDestinationEndpoint.class); // Retrieve bean from message containing Flight Customer name to // look up. BookFlightRequest bookFlightRequest = exchange.getIn().getBody(BookFlightRequest.class); // Create SAP Request object from target endpoint. Structure request = endpoint.getRequest(); // Add Customer Name to request if set if (bookFlightRequest.getCustomerName() != null && bookFlightRequest.getCustomerName().length() > 0) { request.put(\"CUSTOMER_NAME\", bookFlightRequest.getCustomerName()); } } else { throw new Exception(\"No Customer Name\"); } // Put request object into body of exchange message. exchange.getIn().setBody(request); }",
"public void createFlightCustomerInfo(Exchange exchange) throws Exception { // Retrieve SAP response object from body of exchange message. Structure flightCustomerGetListResponse = exchange.getIn().getBody(Structure.class); if (flightCustomerGetListResponse == null) { throw new Exception(\"No Flight Customer Get List Response\"); } // Check BAPI return parameter for errors @SuppressWarnings(\"unchecked\") Table<Structure> bapiReturn = flightCustomerGetListResponse.get(\"RETURN\", Table.class); Structure bapiReturnEntry = bapiReturn.get(0); if (bapiReturnEntry.get(\"TYPE\", String.class) != \"S\") { String message = bapiReturnEntry.get(\"MESSAGE\", String.class); throw new Exception(\"BAPI call failed: \" + message); } // Get customer list table from response object. @SuppressWarnings(\"unchecked\") Table<? extends Structure> customerList = flightCustomerGetListResponse.get(\"CUSTOMER_LIST\", Table.class); if (customerList == null || customerList.size() == 0) { throw new Exception(\"No Customer Info.\"); } // Get Flight Customer data from first row of table. Structure customer = customerList.get(0); // Create bean to hold Flight Customer data. FlightCustomerInfo flightCustomerInfo = new FlightCustomerInfo(); // Get customer id from Flight Customer data and add to bean. String customerId = customer.get(\"CUSTOMERID\", String.class); if (customerId != null) { flightCustomerInfo.setCustomerNumber(customerId); } // Put bean into body of exchange message. exchange.getIn().setHeader(\"flightCustomerInfo\", flightCustomerInfo); }",
"from(\"direct:createFlightTrip\") .to(\"bean:createFlightTripRequest\") .to(\"sap-srfc-destination:nplDest:BAPI_FLTRIP_CREATE?transacted=true\") .to(\"bean:returnFlightTripResponse\");",
"<route> <from uri=\"direct:createFlightTrip\"/> <to uri=\"bean:createFlightTripRequest\"/> <to uri=\"sap-srfc-destination:nplDest:BAPI_FLTRIP_CREATE?transacted=true\"/> <to uri=\"bean:returnFlightTripResponse\"/> </route>",
"DataFormat jaxb = new JaxbDataFormat(\"org.fusesource.sap.example.jaxb\"); from(\"sap-srfc-server:nplserver:BOOK_FLIGHT\") .unmarshal(jaxb) .multicast() .to(\"direct:getFlightConnectionInfo\", \"direct:getFlightCustomerInfo\", \"direct:getPassengerInfo\") .end() .to(\"direct:createFlightTrip\") .marshal(jaxb);",
"<route> <from uri=\"sap-srfc-server:nplserver:BOOK_FLIGHT\"/> <unmarshal> <jaxb contextPath=\"org.fusesource.sap.example.jaxb\"/> </unmarshal> <multicast> <to uri=\"direct:getFlightConnectionInfo\"/> <to uri=\"direct:getFlightCustomerInfo\"/> <to uri=\"direct:getPassengerInfo\"/> </multicast> <to uri=\"direct:createFlightTrip\"/> <marshal> <jaxb contextPath=\"org.fusesource.sap.example.jaxb\"/> </marshal> </route>",
"@XmlRootElement(name=\"Request\", namespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\") @XmlAccessorType(XmlAccessType.FIELD) public class BookFlightRequest { @XmlAttribute(name=\"CUSTNAME\") private String customerName; @XmlAttribute(name=\"FLIGHTDATE\") @XmlJavaTypeAdapter(DateAdapter.class) private Date flightDate; @XmlAttribute(name=\"TRAVELAGENCYNUMBER\") private String travelAgencyNumber; @XmlAttribute(name=\"DESTINATION_FROM\") private String startAirportCode; @XmlAttribute(name=\"DESTINATION_TO\") private String endAirportCode; @XmlAttribute(name=\"PASSFORM\") private String passengerFormOfAddress; @XmlAttribute(name=\"PASSNAME\") private String passengerName; @XmlAttribute(name=\"PASSBIRTH\") @XmlJavaTypeAdapter(DateAdapter.class) private Date passengerDateOfBirth; @XmlAttribute(name=\"CLASS\") private String flightClass; }",
"@XmlRootElement(name=\"Response\", namespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\") @XmlAccessorType(XmlAccessType.FIELD) public class BookFlightResponse { @XmlAttribute(name=\"TRIPNUMBER\") private String tripNumber; @XmlAttribute(name=\"TICKET_PRICE\") private BigDecimal ticketPrice; @XmlAttribute(name=\"TICKET_TAX\") private BigDecimal ticketTax; @XmlAttribute(name=\"CURRENCY\") private String currency; @XmlAttribute(name=\"PASSFORM\") private String passengerFormOfAddress; @XmlAttribute(name=\"PASSNAME\") private String passengerName; @XmlAttribute(name=\"PASSBIRTH\") @XmlJavaTypeAdapter(DateAdapter.class) private Date passengerDateOfBirth; @XmlElement(name=\"FLTINFO\") private FlightInfo flightInfo; @XmlElement(name=\"CONNINFO\") private ConnectionInfoTable connectionInfo; }",
"@XmlRootElement(name=\"FLTINFO\", namespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\") @XmlAccessorType(XmlAccessType.FIELD) public class FlightInfo { @XmlAttribute(name=\"FLIGHTTIME\") private String flightTime; @XmlAttribute(name=\"CITYFROM\") private String cityFrom; @XmlAttribute(name=\"DEPDATE\") @XmlJavaTypeAdapter(DateAdapter.class) private Date departureDate; @XmlAttribute(name=\"DEPTIME\") @XmlJavaTypeAdapter(DateAdapter.class) private Date departureTime; @XmlAttribute(name=\"CITYTO\") private String cityTo; @XmlAttribute(name=\"ARRDATE\") @XmlJavaTypeAdapter(DateAdapter.class) private Date arrivalDate; @XmlAttribute(name=\"ARRTIME\") @XmlJavaTypeAdapter(DateAdapter.class) private Date arrivalTime; }",
"@XmlRootElement(name=\"CONNINFO_TABLE\", namespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\") @XmlAccessorType(XmlAccessType.FIELD) public class ConnectionInfoTable { @XmlElement(name=\"row\") List<ConnectionInfo> rows; }",
"@XmlRootElement(name=\"CONNINFO\", namespace=\"http://sap.fusesource.org/rfc/nplServer/BOOK_FLIGHT\") @XmlAccessorType(XmlAccessType.FIELD) public class ConnectionInfo { @XmlAttribute(name=\"CONNID\") String connectionId; @XmlAttribute(name=\"AIRLINE\") String airline; @XmlAttribute(name=\"PLANETYPE\") String planeType; @XmlAttribute(name=\"CITYFROM\") String cityFrom; @XmlAttribute(name=\"DEPDATE\") @XmlJavaTypeAdapter(DateAdapter.class) Date departureDate; @XmlAttribute(name=\"DEPTIME\") @XmlJavaTypeAdapter(DateAdapter.class) Date departureTime; @XmlAttribute(name=\"CITYTO\") String cityTo; @XmlAttribute(name=\"ARRDATE\") @XmlJavaTypeAdapter(DateAdapter.class) Date arrivalDate; @XmlAttribute(name=\"ARRTIME\") @XmlJavaTypeAdapter(DateAdapter.class) Date arrivalTime; }"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-sap-component-starter |
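As a rough sketch of how the quickstartDest destination configured in the blueprint above is used from a route, the following Java DSL route builder is illustrative only: the class name and the direct: endpoint are invented for the example, while the sap-srfc-destination URI and the BAPI name come from the snippets above.

import org.apache.camel.builder.RouteBuilder;

public class QuickstartRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Call the synchronous RFC BAPI_FLCUST_GETLIST through the "quickstartDest"
        // destination defined in the sap-configuration blueprint bean.
        from("direct:getCustomerList")
            .to("sap-srfc-destination:quickstartDest:BAPI_FLCUST_GETLIST");
    }
}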
25.7. Generating a Certificate Request to Send to a CA | 25.7. Generating a Certificate Request to Send to a CA Once you have created a key, the next step is to generate a certificate request which you need to send to the CA of your choice. Make sure you are in the /usr/share/ssl/certs/ directory, and type the following command: Your system displays the following output and asks you for your passphrase (unless you disabled the passphrase option): Type in the passphrase that you chose when you were generating your key, if prompted. Next, your system displays some instructions and then asks for a series of responses from you. Your inputs are incorporated into the certificate request. The display, with example responses, looks similar to the following: The default answers appear in brackets ( [] ) immediately after each request for input. For example, the first piece of information required is the name of the country where the certificate is to be used, as shown in the following: The default input, in brackets, is GB . Accept the default by pressing Enter or fill in your country's two letter code. You have to type in the rest of the values. All of these should be self-explanatory, but you must follow these guidelines: Do not abbreviate the locality or state. Write them out (for example, St. Louis should be written out as Saint Louis). If you are sending this CSR to a CA, be very careful to provide correct information for all of the fields, but especially for the Organization Name and the Common Name . CAs check the information provided in the CSR to determine whether your organization is responsible for what you provided as the Common Name . CAs reject CSRs which include information they perceive as invalid. For Common Name , make sure you type in the real name of your secure server (a valid DNS name) and not any aliases which the server may have. The Email Address should be the email address for the webmaster or system administrator. Avoid special characters such as @, #, &, and !. Some CAs reject a certificate request which contains a special character. If your company name includes an ampersand (&), spell it out as "and" instead of "&." Do not use either of the extra attributes ( A challenge password and An optional company name ). To continue without entering these fields, just press Enter to accept the blank default for both inputs. The file /etc/httpd/conf/ssl.csr/server.csr is created when you have finished entering your information. This file is your certificate request, ready to send to your CA. After you have decided on a CA, follow the instructions they provide on their website. Their instructions tell you how to send your certificate request, any other documentation that they require, and your payment to them. After you have fulfilled the CA's requirements, they send a certificate to you (usually by email). Save (or cut and paste) the certificate that they send you as /etc/httpd/conf/ssl.crt/server.crt . Be sure to keep a backup of this file. | [
"make certreq",
"umask 77 ; /usr/bin/openssl req -new -key -set_serial num /etc/httpd/conf/ssl.key/server.key -out /etc/httpd/conf/ssl.csr/server.csr Using configuration from /usr/share/ssl/openssl.cnf Enter pass phrase:",
"You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [GB]: US State or Province Name (full name) [Berkshire]: North Carolina Locality Name (eg, city) [Newbury]: Raleigh Organization Name (eg, company) [My Company Ltd]: Test Company Organizational Unit Name (eg, section) []: Testing Common Name (your name or server's hostname) []: test.example.com Email Address []: [email protected] Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []:",
"Country Name (2 letter code) [GB]:"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Apache_HTTP_Secure_Server_Configuration-Generating_a_Certificate_Request_to_Send_to_a_CA |
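Before submitting the request to a CA, it is worth reviewing what was generated; a standard OpenSSL check against the file created above is:

openssl req -noout -text -in /etc/httpd/conf/ssl.csr/server.csr

This prints the subject fields and the public key so you can confirm the Common Name and the other values before sending the CSR.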
1.2. JBoss Data Virtualization and JDBC | 1.2. JBoss Data Virtualization and JDBC Red Hat JBoss Data Virtualization provides an API that builds on Java Database Connectivity (JDBC), allowing client applications to issue SQL queries against deployed virtual databases (VDBs). Note Your client applications must use Java JDK 1.6 or higher to connect to Red Hat JBoss Data Virtualization VDBs. Note The JBoss Data Virtualization JDBC API is compatible with the JDBC 4.0 specification but does not fully support all methods. Advanced features, such as updatable result sets and SQL3 data types, are also not supported. See Section A.3, "Unsupported Classes and Methods in java.sql" and Section A.4, "Unsupported Classes and Methods in javax.sql" for more information about unsupported classes and methods. Important Support for earlier versions of JDK has been deprecated. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/jboss_data_virtualization_and_jdbc
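As a sketch of what such a client connection can look like, the example below assumes the Teiid JDBC driver that underlies JBoss Data Virtualization; the driver class, URL format, host, port, VDB name, credentials, and view name are assumptions or placeholders rather than values taken from this section.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class VdbQuery {
    public static void main(String[] args) throws Exception {
        // Driver class and "jdbc:teiid:<vdb>@mm://<host>:<port>" URL format are assumed;
        // myVDB, dvhost, 31000, user, password, and SomeView are placeholders.
        Class.forName("org.teiid.jdbc.TeiidDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:teiid:myVDB@mm://dvhost:31000", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM SomeView")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}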
Lightspeed | Lightspeed OpenShift Container Platform 4.15 About Lightspeed Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/lightspeed/index |
Chapter 62. Salesforce Delete Sink | Chapter 62. Salesforce Delete Sink Removes an object from Salesforce. The body received must be a JSON containing two keys: sObjectId and sObjectName. Example body: { "sObjectId": "XXXXX0", "sObjectName": "Contact" } 62.1. Configuration Options The following table summarizes the configuration options available for the salesforce-delete-sink Kamelet: Property Name Description Type Default Example clientId * Consumer Key The Salesforce application consumer key string clientSecret * Consumer Secret The Salesforce application consumer secret string password * Password The Salesforce user password string userName * Username The Salesforce username string loginUrl Login URL The Salesforce instance login URL string "https://login.salesforce.com" Note Fields marked with an asterisk (*) are mandatory. 62.2. Dependencies At runtime, the salesforce-delete-sink Kamelet relies upon the presence of the following dependencies: camel:salesforce camel:kamelet camel:core camel:jsonpath 62.3. Usage This section describes how you can use the salesforce-delete-sink . 62.3.1. Knative Sink You can use the salesforce-delete-sink Kamelet as a Knative sink by binding it to a Knative object. salesforce-delete-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-delete-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-delete-sink properties: clientId: "The Consumer Key" clientSecret: "The Consumer Secret" password: "The Password" userName: "The Username" 62.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 62.3.1.2. Procedure for using the cluster CLI Save the salesforce-delete-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f salesforce-delete-sink-binding.yaml 62.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel salesforce-delete-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.userName=The Username" This command creates the KameletBinding in the current namespace on the cluster. 62.3.2. Kafka Sink You can use the salesforce-delete-sink Kamelet as a Kafka sink by binding it to a Kafka topic. salesforce-delete-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-delete-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-delete-sink properties: clientId: "The Consumer Key" clientSecret: "The Consumer Secret" password: "The Password" userName: "The Username" 62.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 62.3.2.2. Procedure for using the cluster CLI Save the salesforce-delete-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f salesforce-delete-sink-binding.yaml 62.3.2.3. 
Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic salesforce-delete-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.userName=The Username" This command creates the KameletBinding in the current namespace on the cluster. 62.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/salesforce-delete-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-delete-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-delete-sink properties: clientId: \"The Consumer Key\" clientSecret: \"The Consumer Secret\" password: \"The Password\" userName: \"The Username\"",
"apply -f salesforce-delete-sink-binding.yaml",
"kamel bind channel:mychannel salesforce-delete-sink -p \"sink.clientId=The Consumer Key\" -p \"sink.clientSecret=The Consumer Secret\" -p \"sink.password=The Password\" -p \"sink.userName=The Username\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-delete-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-delete-sink properties: clientId: \"The Consumer Key\" clientSecret: \"The Consumer Secret\" password: \"The Password\" userName: \"The Username\"",
"apply -f salesforce-delete-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic salesforce-delete-sink -p \"sink.clientId=The Consumer Key\" -p \"sink.clientSecret=The Consumer Secret\" -p \"sink.password=The Password\" -p \"sink.userName=The Username\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/salesforce-sink-delete |
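After applying either binding, a quick sanity check is to read the resource back; the command below assumes the KameletBinding CRD registered by the Camel K operator and uses the binding name from the examples above:

oc get kameletbinding salesforce-delete-sink-binding -o yaml

The status stanza of the returned object shows whether the binding has been reconciled and is ready to receive events.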
Chapter 13. Ingress [networking.k8s.io/v1] | Chapter 13. Ingress [networking.k8s.io/v1] Description Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend. An Ingress can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc. Type object 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object IngressSpec describes the Ingress the user wishes to exist. status object IngressStatus describe the current state of the Ingress. 13.1.1. .spec Description IngressSpec describes the Ingress the user wishes to exist. Type object Property Type Description defaultBackend object IngressBackend describes all endpoints for a given service and port. ingressClassName string ingressClassName is the name of an IngressClass cluster resource. Ingress controller implementations use this field to know whether they should be serving this Ingress resource, by a transitive connection (controller IngressClass Ingress resource). Although the kubernetes.io/ingress.class annotation (simple constant name) was never formally defined, it was widely supported by Ingress controllers to create a direct binding between Ingress controller and Ingress resources. Newly created Ingress resources should prefer using the field. However, even though the annotation is officially deprecated, for backwards compatibility reasons, ingress controllers should still honor that annotation if present. rules array rules is a list of host rules used to configure the Ingress. If unspecified, or no rule matches, all traffic is sent to the default backend. rules[] object IngressRule represents the rules mapping the paths under a specified host to the related backend services. Incoming requests are first evaluated for a host match, then routed to the backend associated with the matching IngressRuleValue. tls array tls represents the TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI. tls[] object IngressTLS describes the transport layer security associated with an ingress. 13.1.2. .spec.defaultBackend Description IngressBackend describes all endpoints for a given service and port. Type object Property Type Description resource TypedLocalObjectReference resource is an ObjectRef to another Kubernetes resource in the namespace of the Ingress object. If resource is specified, a service.Name and service.Port must not be specified. This is a mutually exclusive setting with "Service". 
service object IngressServiceBackend references a Kubernetes Service as a Backend. 13.1.3. .spec.defaultBackend.service Description IngressServiceBackend references a Kubernetes Service as a Backend. Type object Required name Property Type Description name string name is the referenced service. The service must exist in the same namespace as the Ingress object. port object ServiceBackendPort is the service port being referenced. 13.1.4. .spec.defaultBackend.service.port Description ServiceBackendPort is the service port being referenced. Type object Property Type Description name string name is the name of the port on the Service. This is a mutually exclusive setting with "Number". number integer number is the numerical port number (e.g. 80) on the Service. This is a mutually exclusive setting with "Name". 13.1.5. .spec.rules Description rules is a list of host rules used to configure the Ingress. If unspecified, or no rule matches, all traffic is sent to the default backend. Type array 13.1.6. .spec.rules[] Description IngressRule represents the rules mapping the paths under a specified host to the related backend services. Incoming requests are first evaluated for a host match, then routed to the backend associated with the matching IngressRuleValue. Type object Property Type Description host string host is the fully qualified domain name of a network host, as defined by RFC 3986. Note the following deviations from the "host" part of the URI as defined in RFC 3986: 1. IPs are not allowed. Currently an IngressRuleValue can only apply to the IP in the Spec of the parent Ingress. 2. The : delimiter is not respected because ports are not allowed. Currently the port of an Ingress is implicitly :80 for http and :443 for https. Both these may change in the future. Incoming requests are matched against the host before the IngressRuleValue. If the host is unspecified, the Ingress routes all traffic based on the specified IngressRuleValue. host can be "precise" which is a domain name without the terminating dot of a network host (e.g. "foo.bar.com") or "wildcard", which is a domain name prefixed with a single wildcard label (e.g. " .foo.com"). The wildcard character ' ' must appear by itself as the first DNS label and matches only a single label. You cannot have a wildcard label by itself (e.g. Host == "*"). Requests will be matched against the Host field in the following way: 1. If host is precise, the request matches this rule if the http host header is equal to Host. 2. If host is a wildcard, then the request matches this rule if the http host header is to equal to the suffix (removing the first label) of the wildcard rule. http object HTTPIngressRuleValue is a list of http selectors pointing to backends. In the example: http://<host>/<path>?<searchpart> backend where where parts of the url correspond to RFC 3986, this resource will be used to match against everything after the last '/' and before the first '?' or '#'. 13.1.7. .spec.rules[].http Description HTTPIngressRuleValue is a list of http selectors pointing to backends. In the example: http://<host>/<path>?<searchpart> backend where where parts of the url correspond to RFC 3986, this resource will be used to match against everything after the last '/' and before the first '?' or '#'. Type object Required paths Property Type Description paths array paths is a collection of paths that map requests to backends. paths[] object HTTPIngressPath associates a path with a backend. Incoming urls matching the path are forwarded to the backend. 
13.1.8. .spec.rules[].http.paths Description paths is a collection of paths that map requests to backends. Type array 13.1.9. .spec.rules[].http.paths[] Description HTTPIngressPath associates a path with a backend. Incoming urls matching the path are forwarded to the backend. Type object Required pathType backend Property Type Description backend object IngressBackend describes all endpoints for a given service and port. path string path is matched against the path of an incoming request. Currently it can contain characters disallowed from the conventional "path" part of a URL as defined by RFC 3986. Paths must begin with a '/' and must be present when using PathType with value "Exact" or "Prefix". pathType string pathType determines the interpretation of the path matching. PathType can be one of the following values: * Exact: Matches the URL path exactly. * Prefix: Matches based on a URL path prefix split by '/'. Matching is done on a path element by element basis. A path element refers is the list of labels in the path split by the '/' separator. A request is a match for path p if every p is an element-wise prefix of p of the request path. Note that if the last element of the path is a substring of the last element in request path, it is not a match (e.g. /foo/bar matches /foo/bar/baz, but does not match /foo/barbaz). * ImplementationSpecific: Interpretation of the Path matching is up to the IngressClass. Implementations can treat this as a separate PathType or treat it identically to Prefix or Exact path types. Implementations are required to support all path types. Possible enum values: - "Exact" matches the URL path exactly and with case sensitivity. - "ImplementationSpecific" matching is up to the IngressClass. Implementations can treat this as a separate PathType or treat it identically to Prefix or Exact path types. - "Prefix" matches based on a URL path prefix split by '/'. Matching is case sensitive and done on a path element by element basis. A path element refers to the list of labels in the path split by the '/' separator. A request is a match for path p if every p is an element-wise prefix of p of the request path. Note that if the last element of the path is a substring of the last element in request path, it is not a match (e.g. /foo/bar matches /foo/bar/baz, but does not match /foo/barbaz). If multiple matching paths exist in an Ingress spec, the longest matching path is given priority. Examples: - /foo/bar does not match requests to /foo/barbaz - /foo/bar matches request to /foo/bar and /foo/bar/baz - /foo and /foo/ both match requests to /foo and /foo/. If both paths are present in an Ingress spec, the longest matching path (/foo/) is given priority. 13.1.10. .spec.rules[].http.paths[].backend Description IngressBackend describes all endpoints for a given service and port. Type object Property Type Description resource TypedLocalObjectReference resource is an ObjectRef to another Kubernetes resource in the namespace of the Ingress object. If resource is specified, a service.Name and service.Port must not be specified. This is a mutually exclusive setting with "Service". service object IngressServiceBackend references a Kubernetes Service as a Backend. 13.1.11. .spec.rules[].http.paths[].backend.service Description IngressServiceBackend references a Kubernetes Service as a Backend. Type object Required name Property Type Description name string name is the referenced service. The service must exist in the same namespace as the Ingress object. 
port object ServiceBackendPort is the service port being referenced. 13.1.12. .spec.rules[].http.paths[].backend.service.port Description ServiceBackendPort is the service port being referenced. Type object Property Type Description name string name is the name of the port on the Service. This is a mutually exclusive setting with "Number". number integer number is the numerical port number (e.g. 80) on the Service. This is a mutually exclusive setting with "Name". 13.1.13. .spec.tls Description tls represents the TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI. Type array 13.1.14. .spec.tls[] Description IngressTLS describes the transport layer security associated with an ingress. Type object Property Type Description hosts array (string) hosts is a list of hosts included in the TLS certificate. The values in this list must match the name/s used in the tlsSecret. Defaults to the wildcard host setting for the loadbalancer controller fulfilling this Ingress, if left unspecified. secretName string secretName is the name of the secret used to terminate TLS traffic on port 443. Field is left optional to allow TLS routing based on SNI hostname alone. If the SNI host in a listener conflicts with the "Host" header field used by an IngressRule, the SNI host is used for termination and value of the "Host" header is used for routing. 13.1.15. .status Description IngressStatus describe the current state of the Ingress. Type object Property Type Description loadBalancer object IngressLoadBalancerStatus represents the status of a load-balancer. 13.1.16. .status.loadBalancer Description IngressLoadBalancerStatus represents the status of a load-balancer. Type object Property Type Description ingress array ingress is a list containing ingress points for the load-balancer. ingress[] object IngressLoadBalancerIngress represents the status of a load-balancer ingress point. 13.1.17. .status.loadBalancer.ingress Description ingress is a list containing ingress points for the load-balancer. Type array 13.1.18. .status.loadBalancer.ingress[] Description IngressLoadBalancerIngress represents the status of a load-balancer ingress point. Type object Property Type Description hostname string hostname is set for load-balancer ingress points that are DNS based. ip string ip is set for load-balancer ingress points that are IP based. ports array ports provides information about the ports exposed by this LoadBalancer. ports[] object IngressPortStatus represents the error condition of a service port 13.1.19. .status.loadBalancer.ingress[].ports Description ports provides information about the ports exposed by this LoadBalancer. Type array 13.1.20. .status.loadBalancer.ingress[].ports[] Description IngressPortStatus represents the error condition of a service port Type object Required port protocol Property Type Description error string error is to record the problem with the service port The format of the error shall comply with the following rules: - built-in error values shall be specified in this file and those shall use CamelCase names - cloud provider specific error values must have names that comply with the format foo.example.com/CamelCase. port integer port is the port number of the ingress port. protocol string protocol is the protocol of the ingress port. 
The supported values are: "TCP", "UDP", "SCTP" Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 13.2. API endpoints The following API endpoints are available: /apis/networking.k8s.io/v1/ingresses GET : list or watch objects of kind Ingress /apis/networking.k8s.io/v1/watch/ingresses GET : watch individual changes to a list of Ingress. deprecated: use the 'watch' parameter with a list operation instead. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses DELETE : delete collection of Ingress GET : list or watch objects of kind Ingress POST : create an Ingress /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/ingresses GET : watch individual changes to a list of Ingress. deprecated: use the 'watch' parameter with a list operation instead. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name} DELETE : delete an Ingress GET : read the specified Ingress PATCH : partially update the specified Ingress PUT : replace the specified Ingress /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/ingresses/{name} GET : watch changes to an object of kind Ingress. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}/status GET : read status of the specified Ingress PATCH : partially update status of the specified Ingress PUT : replace status of the specified Ingress 13.2.1. /apis/networking.k8s.io/v1/ingresses HTTP method GET Description list or watch objects of kind Ingress Table 13.1. HTTP responses HTTP code Reponse body 200 - OK IngressList schema 401 - Unauthorized Empty 13.2.2. /apis/networking.k8s.io/v1/watch/ingresses HTTP method GET Description watch individual changes to a list of Ingress. deprecated: use the 'watch' parameter with a list operation instead. Table 13.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.3. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses HTTP method DELETE Description delete collection of Ingress Table 13.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 13.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Ingress Table 13.5. HTTP responses HTTP code Reponse body 200 - OK IngressList schema 401 - Unauthorized Empty HTTP method POST Description create an Ingress Table 13.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.7. Body parameters Parameter Type Description body Ingress schema Table 13.8. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 202 - Accepted Ingress schema 401 - Unauthorized Empty 13.2.4. /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/ingresses HTTP method GET Description watch individual changes to a list of Ingress. deprecated: use the 'watch' parameter with a list operation instead. Table 13.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.5. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name} Table 13.10. Global path parameters Parameter Type Description name string name of the Ingress HTTP method DELETE Description delete an Ingress Table 13.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 13.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Ingress Table 13.13. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Ingress Table 13.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.15. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Ingress Table 13.16. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.17. Body parameters Parameter Type Description body Ingress schema Table 13.18. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 401 - Unauthorized Empty 13.2.6. /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/ingresses/{name} Table 13.19. Global path parameters Parameter Type Description name string name of the Ingress HTTP method GET Description watch changes to an object of kind Ingress. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 13.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.7. /apis/networking.k8s.io/v1/namespaces/{namespace}/ingresses/{name}/status Table 13.21. Global path parameters Parameter Type Description name string name of the Ingress HTTP method GET Description read status of the specified Ingress Table 13.22. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Ingress Table 13.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.24. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Ingress Table 13.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.26. Body parameters Parameter Type Description body Ingress schema Table 13.27. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/network_apis/ingress-networking-k8s-io-v1 |
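Drawing the spec fields above together, a minimal Ingress manifest might look like the following sketch; the object name, host, service name, and port number are placeholders:

# Illustrative only; all names, the host, and the port are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: www.example.com        # precise host; a wildcard such as *.example.com is also allowed
    http:
      paths:
      - path: /
        pathType: Prefix         # one of Exact, Prefix, ImplementationSpecific
        backend:
          service:
            name: example-service
            port:
              number: 80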
7.7. Downgrading SSSD | 7.7. Downgrading SSSD When downgrading - either downgrading the version of SSSD or downgrading the operating system itself - the existing SSSD cache needs to be removed. If the cache is not removed, the SSSD process is dead but a PID file remains. The SSSD logs show that it cannot connect to any of its associated domains because the cache version is unrecognized. Users are then no longer recognized and are unable to authenticate to domain services and hosts. After downgrading the SSSD version: Delete the existing cache database files. Restart the SSSD process. | [
"(Wed Nov 28 21:25:50 2012) [sssd] [sysdb_domain_init_internal] (0x0010): Unknown DB version [0.14], expected [0.10] for domain AD!",
"rm -rf /var/lib/sss/db/*",
"systemctl restart sssd.service"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/sssd-downgrade |
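Optionally, after the restart you can confirm that the service came back up with the cleared cache:

systemctl status sssd.service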
Chapter 3. Getting support | Chapter 3. Getting support 3.1. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 3.2. About the Red Hat Knowledgebase The Red Hat Knowledgebase provides rich content aimed at helping you make the most of Red Hat's products and technologies. The Red Hat Knowledgebase consists of articles, product documentation, and videos outlining best practices on installing, configuring, and using Red Hat products. In addition, you can search for solutions to known issues, each providing concise root cause descriptions and remedial steps. 3.3. Searching the Red Hat Knowledgebase In the event of an OpenShift Container Platform issue, you can perform an initial search to determine if a solution already exists within the Red Hat Knowledgebase. Prerequisites You have a Red Hat Customer Portal account. Procedure Log in to the Red Hat Customer Portal . Click Search . In the search field, input keywords and strings relating to the problem, including: OpenShift Container Platform components (such as etcd ) Related procedure (such as installation ) Warnings, error messages, and other outputs related to explicit failures Click the Enter key. Optional: Select the OpenShift Container Platform product filter. Optional: Select the Documentation content type filter. 3.4. Submitting a support case Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have a Red Hat Customer Portal account. You have a Red Hat Standard or Premium subscription. Procedure Log in to the Customer Support page of the Red Hat Customer Portal. Click Get support . On the Cases tab of the Customer Support page: Optional: Change the pre-filled account and owner details if needed. Select the appropriate category for your issue, such as Bug or Defect , and click Continue . Enter the following information: In the Summary field, enter a concise but descriptive problem summary and further details about the symptoms being experienced, as well as your expectations. Select OpenShift Container Platform from the Product drop-down menu. Select 4.15 from the Version drop-down. Review the list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. If the suggested articles do not address the issue, click Continue . Review the updated list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. The list is refined as you provide more information during the case creation process. If the suggested articles do not address the issue, click Continue . Ensure that the account information presented is as expected, and if not, amend accordingly. 
Check that the autofilled OpenShift Container Platform Cluster ID is correct. If it is not, manually obtain your cluster ID. To manually obtain your cluster ID using the OpenShift Container Platform web console: Navigate to Home Overview . Find the value in the Cluster ID field of the Details section. Alternatively, it is possible to open a new support case through the OpenShift Container Platform web console and have your cluster ID autofilled. From the toolbar, navigate to (?) Help Open Support Case . The Cluster ID value is autofilled. To obtain your cluster ID using the OpenShift CLI ( oc ), run the following command: USD oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}' Complete the following questions where prompted and then click Continue : What are you experiencing? What are you expecting to happen? Define the value or impact to you or the business. Where are you experiencing this behavior? What environment? When does this behavior occur? Frequency? Repeatedly? At certain times? Upload relevant diagnostic data files and click Continue . It is recommended to include data gathered using the oc adm must-gather command as a starting point, plus any issue specific data that is not collected by that command. Input relevant case management details and click Continue . Preview the case details and click Submit . 3.5. Additional resources For details about identifying issues with your cluster, see Using Insights to identify issues with your cluster . | [
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/support/getting-support |
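For teams that automate case creation, the cluster ID can also be read directly from the ClusterVersion API resource instead of through the web console or the oc CLI. The following Java sketch is illustrative only: it assumes the standard resource path /apis/config.openshift.io/v1/clusterversions/version, an API server URL and bearer token supplied through hypothetical OPENSHIFT_API and OPENSHIFT_TOKEN environment variables, and a JVM that already trusts the API server certificate.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Fetches the OpenShift cluster ID from the ClusterVersion resource,
 * roughly equivalent to the oc jsonpath command shown in this chapter.
 */
public class ClusterIdLookup {

    public static void main(String[] args) throws Exception {
        // Hypothetical environment variables; adjust to your own environment.
        String apiServer = System.getenv().getOrDefault("OPENSHIFT_API", "https://api.example.com:6443");
        String token = System.getenv("OPENSHIFT_TOKEN"); // for example, the output of `oc whoami -t`

        HttpClient client = HttpClient.newHttpClient(); // assumes the API server TLS certificate is trusted by the JVM

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiServer + "/apis/config.openshift.io/v1/clusterversions/version"))
                .header("Authorization", "Bearer " + token)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IllegalStateException("Unexpected status: " + response.statusCode());
        }

        // Pull spec.clusterID out of the JSON body with a simple regular expression to avoid extra dependencies.
        Matcher m = Pattern.compile("\"clusterID\"\\s*:\\s*\"([^\"]+)\"").matcher(response.body());
        if (m.find()) {
            System.out.println("Cluster ID: " + m.group(1));
        } else {
            System.err.println("clusterID not found in response");
        }
    }
}

The regular expression keeps the sketch dependency-free; real automation would normally parse the JSON with a proper library or simply run the oc command shown above.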
Custom Tekton Hub instance | Custom Tekton Hub instance Red Hat OpenShift Pipelines 1.15 Installing a custom instance of Tekton Hub Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/custom_tekton_hub_instance/index |
Chapter 233. Mustache Component | Chapter 233. Mustache Component Available as of Camel version 2.12 The mustache: component allows for processing a message using a Mustache template. This can be ideal when using Templating to generate responses for requests. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-mustache</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 233.1. URI format mustache:templateName[?options] Where templateName is the classpath-local URI of the template to invoke; or the complete URL of the remote template (eg: file://folder/myfile.mustache ). You can append query options to the URI in the following format, ?option=value&option=value&... 233.2. Options The Mustache component supports 4 options, which are listed below. Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so imposes a potential security risk as this opens access to the full power of CamelContext API. false boolean allowTemplateFromHeader (producer) Whether to allow using a resource template from a header (default false). Enabling this option has security ramifications. For example, if the header contains untrusted or user-derived content, this can ultimately impact the confidentiality and integrity of your end application, so use this option with caution. false boolean mustacheFactory (advanced) To use a custom MustacheFactory MustacheFactory resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Mustache endpoint is configured using URI syntax: with the following path and query parameters: 233.2.1. Path Parameters (1 parameters): Name Description Default Type resourceUri Required Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http load the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod. String 233.2.2. Query Parameters (7 parameters): Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so imposes a potential security risk as this opens access to the full power of CamelContext API. false boolean allowTemplateFromHeader (producer) Whether to allow using a resource template from a header (default false). Enabling this option has security ramifications. For example, if the header contains untrusted or user-derived content, this can ultimately impact the confidentiality and integrity of your end application, so use this option with caution. false boolean contentCache (producer) Sets whether to use resource content cache or not false boolean encoding (producer) Character encoding of the resource content. String endDelimiter (producer) Characters used to mark template code end.
}} String startDelimiter (producer) Characters used to mark template code beginning. {{ String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 233.3. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.mustache.enabled Enable mustache component true Boolean camel.component.mustache.mustache-factory To use a custom MustacheFactory. The option is a com.github.mustachejava.MustacheFactory type. String camel.component.mustache.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 233.4. Mustache Context Camel will provide exchange information in the Mustache context (just a Map ). The Exchange is transferred as: key value exchange The Exchange itself. exchange.properties The Exchange properties. headers The headers of the In message. camelContext The Camel Context. request The In message. body The In message body. response The Out message (only for InOut message exchange pattern). 233.5. Dynamic templates Camel provides two headers by which you can define a different resource location for a template or the template content itself. If either of these headers is set, Camel uses it instead of the endpoint-configured resource. This allows you to provide a dynamic template at runtime. Header Type Description Support Version MustacheConstants.MUSTACHE_RESOURCE_URI String A URI for the template resource to use instead of the endpoint configured. MustacheConstants.MUSTACHE_TEMPLATE String The template to use instead of the endpoint configured. 233.6. Samples For example, you could use something like: from("activemq:My.Queue"). to("mustache:com/acme/MyResponse.mustache"); To use a Mustache template to formulate a response for a message for InOut message exchanges (where there is a JMSReplyTo header). If you want to use InOnly and consume the message and send it to another destination you could use: from("activemq:My.Queue"). to("mustache:com/acme/MyResponse.mustache"). to("activemq:Another.Queue"); It's possible to specify what template the component should use dynamically via a header, so for example: from("direct:in"). setHeader(MustacheConstants.MUSTACHE_RESOURCE_URI).constant("path/to/my/template.mustache"). to("mustache:dummy?allowTemplateFromHeader=true"); Warning Enabling the allowTemplateFromHeader option has security ramifications. For example, if the header contains untrusted or user-derived content, this can ultimately impact the confidentiality and integrity of your end application, so use this option with caution. 233.7. The Email Sample In this sample we want to use Mustache templating for an order confirmation email. The email template is laid out in Mustache as: Dear {{headers.lastName}}, {{headers.firstName}} Thanks for the order of {{headers.item}}. Regards Camel Riders Bookstore {{body}} 233.8. See Also Configuring Camel Component Endpoint Getting Started | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-mustache</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"mustache:templateName[?options]",
"mustache:resourceUri",
"from(\"activemq:My.Queue\"). to(\"mustache:com/acme/MyResponse.mustache\");",
"from(\"activemq:My.Queue\"). to(\"mustache:com/acme/MyResponse.mustache\"). to(\"activemq:Another.Queue\");",
"from(\"direct:in\"). setHeader(MustacheConstants.MUSTACHE_RESOURCE_URI).constant(\"path/to/my/template.mustache\"). to(\"mustache:dummy?allowTemplateFromHeader=true\");",
"Dear {{headers.lastName}}}, {{headers.firstName}} Thanks for the order of {{headers.item}}. Regards Camel Riders Bookstore {{body}}"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/mustache-component |
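The dynamic templates table above lists the MustacheConstants.MUSTACHE_TEMPLATE header, but only the MUSTACHE_RESOURCE_URI header appears in a sample route. The sketch below illustrates the template-content header; the endpoints and the name and orderId headers are invented for the example, and, as the warning in the samples section notes, allowTemplateFromHeader should only be enabled when the header content is trusted.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mustache.MustacheConstants;

/**
 * Illustrative route: supply the Mustache template itself at runtime via the
 * MUSTACHE_TEMPLATE header instead of pointing at a template resource.
 */
public class InlineTemplateRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("direct:in")
            // The template body travels in a header; the endpoint's resourceUri ("dummy") is not used.
            .setHeader(MustacheConstants.MUSTACHE_TEMPLATE)
                .constant("Hello {{headers.name}}, your order {{headers.orderId}} has shipped.")
            .to("mustache:dummy?allowTemplateFromHeader=true")
            .to("log:rendered");
    }
}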
Chapter 1. Support policy | Chapter 1. Support policy Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Red Hat build of OpenJDK is not supporting RHEL 6 as a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.402/openjdk8-support-policy |
Chapter 10. Distributed tracing | Chapter 10. Distributed tracing Distributed tracing allows you to track the progress of transactions between applications in a distributed system. In a microservices architecture, tracing tracks the progress of transactions between services. Trace data is useful for monitoring application performance and investigating issues with target systems and end-user applications. In AMQ Streams, tracing facilitates the end-to-end tracking of messages: from source systems to Kafka, and then from Kafka to target systems and applications. It complements the metrics that are available to view in Grafana dashboards , as well as the component loggers. How AMQ Streams supports tracing Support for tracing is built in to the following components: Kafka Connect (including Kafka Connect with Source2Image support) MirrorMaker MirrorMaker 2.0 AMQ Streams Kafka Bridge You enable and configure tracing for these components using template configuration properties in their custom resources. To enable tracing in Kafka producers, consumers, and Kafka Streams API applications, you instrument application code using the OpenTracing Apache Kafka Client Instrumentation library (included with AMQ Streams). When instrumented, clients generate trace data; for example, when producing messages or writing offsets to the log. Traces are sampled according to a sampling strategy and then visualized in the Jaeger user interface. Note Tracing is not supported for Kafka brokers. Setting up tracing for applications and systems beyond AMQ Streams is outside the scope of this chapter. To learn more about this subject, search for "inject and extract" in the OpenTracing documentation . Outline of procedures To set up tracing for AMQ Streams, follow these procedures in order: Set up tracing for clients: Initialize a Jaeger tracer for Kafka clients Instrument clients with tracers: Instrument producers and consumers for tracing Instrument Kafka Streams applications for tracing Set up tracing for MirrorMaker, Kafka Connect, and the Kafka Bridge Prerequisites The Jaeger backend components are deployed to your OpenShift cluster. For deployment instructions, see the Jaeger deployment documentation . 10.1. Overview of OpenTracing and Jaeger AMQ Streams uses the OpenTracing and Jaeger projects. OpenTracing is an API specification that is independent from the tracing or monitoring system. The OpenTracing APIs are used to instrument application code Instrumented applications generate traces for individual transactions across the distributed system Traces are composed of spans that define specific units of work over time Jaeger is a tracing system for microservices-based distributed systems. Jaeger implements the OpenTracing APIs and provides client libraries for instrumentation The Jaeger user interface allows you to query, filter, and analyze trace data Additional resources OpenTracing Jaeger 10.2. Setting up tracing for Kafka clients Initialize a Jaeger tracer to instrument your client applications for distributed tracing. 10.2.1. Initializing a Jaeger tracer for Kafka clients Configure and initialize a Jaeger tracer using a set of tracing environment variables . Procedure In each client application: Add Maven dependencies for Jaeger to the pom.xml file for the client application: <dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.1.0.redhat-00002</version> </dependency> Define the configuration of the Jaeger tracer using the tracing environment variables . 
Create the Jaeger tracer from the environment variables that you defined in step two: Tracer tracer = Configuration.fromEnv().getTracer(); Note For alternative ways to initialize a Jaeger tracer, see the Java OpenTracing library documentation. Register the Jaeger tracer as a global tracer: GlobalTracer.register(tracer); A Jaeger tracer is now initialized for the client application to use. 10.2.2. Environment variables for tracing Use these environment variables when configuring a Jaeger tracer for Kafka clients. Note The tracing environment variables are part of the Jaeger project and are subject to change. For the latest environment variables, see the Jaeger documentation . Property Required Description JAEGER_SERVICE_NAME Yes The name of the Jaeger tracer service. JAEGER_AGENT_HOST No The hostname for communicating with the jaeger-agent through the User Datagram Protocol (UDP). JAEGER_AGENT_PORT No The port used for communicating with the jaeger-agent through UDP. JAEGER_ENDPOINT No The traces endpoint. Only define this variable if the client application will bypass the jaeger-agent and connect directly to the jaeger-collector . JAEGER_AUTH_TOKEN No The authentication token to send to the endpoint as a bearer token. JAEGER_USER No The username to send to the endpoint if using basic authentication. JAEGER_PASSWORD No The password to send to the endpoint if using basic authentication. JAEGER_PROPAGATION No A comma-separated list of formats to use for propagating the trace context. Defaults to the standard Jaeger format. Valid values are jaeger , b3 , and w3c . JAEGER_REPORTER_LOG_SPANS No Indicates whether the reporter should also log the spans. JAEGER_REPORTER_MAX_QUEUE_SIZE No The reporter's maximum queue size. JAEGER_REPORTER_FLUSH_INTERVAL No The reporter's flush interval, in ms. Defines how frequently the Jaeger reporter flushes span batches. JAEGER_SAMPLER_TYPE No The sampling strategy to use for client traces: Constant Probabilistic Rate Limiting Remote (the default) To sample all traces, use the Constant sampling strategy with a parameter of 1. For more information, see the Jaeger documentation . JAEGER_SAMPLER_PARAM No The sampler parameter (number). JAEGER_SAMPLER_MANAGER_HOST_PORT No The hostname and port to use if a Remote sampling strategy is selected. JAEGER_TAGS No A comma-separated list of tracer-level tags that are added to all reported spans. The value can also refer to an environment variable using the format USD{envVarName:default} . :default is optional and identifies a value to use if the environment variable cannot be found. Additional resources Section 10.2.1, "Initializing a Jaeger tracer for Kafka clients" 10.3. Instrumenting Kafka clients with tracers Instrument Kafka producer and consumer clients, and Kafka Streams API applications for distributed tracing. 10.3.1. Instrumenting producers and consumers for tracing Use a Decorator pattern or Interceptors to instrument your Java producer and consumer application code for tracing. Procedure In the application code of each producer and consumer application: Add the Maven dependency for OpenTracing to the producer or consumer's pom.xml file. <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.15.redhat-00001</version> </dependency> Instrument your client application code using either a Decorator pattern or Interceptors. 
To use a Decorator pattern: // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer: TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); // Send: tracingProducer.send(...); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); // Subscribe: tracingConsumer.subscribe(Collections.singletonList("messages")); // Get messages: ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); // Retrieve SpanContext from polled record (consumer side): ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); To use Interceptors: // Register the tracer with GlobalTracer: GlobalTracer.register(tracer); // Add the TracingProducerInterceptor to the sender properties: senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Send: producer.send(...); // Add the TracingConsumerInterceptor to the consumer properties: consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Subscribe: consumer.subscribe(Collections.singletonList("messages")); // Get messages: ConsumerRecords<Integer, String> records = consumer.poll(1000); // Retrieve the SpanContext from a polled message (consumer side): ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); 10.3.1.1. Custom span names in a Decorator pattern A span is a logical unit of work in Jaeger, with an operation name, start time, and duration. To use a Decorator pattern to instrument your producer and consumer applications, define custom span names by passing a BiFunction object as an additional argument when creating the TracingKafkaProducer and TracingKafkaConsumer objects. The OpenTracing Apache Kafka Client Instrumentation library includes several built-in span names. Example: Using custom span names to instrument client application code in a Decorator pattern // Create a BiFunction for the KafkaProducer that operates on (String operationName, ProducerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ProducerRecord, String> producerSpanNameProvider = (operationName, producerRecord) -> "CUSTOM_PRODUCER_NAME"; // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer, producerSpanNameProvider); // Spans created by the tracingProducer will now have "CUSTOM_PRODUCER_NAME" as the span name. 
// Create a BiFunction for the KafkaConsumer that operates on (String operationName, ConsumerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ConsumerRecord, String> consumerSpanNameProvider = (operationName, consumerRecord) -> operationName.toUpperCase(); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer, passing in the consumerSpanNameProvider BiFunction: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer, consumerSpanNameProvider); // Spans created by the tracingConsumer will have the operation name as the span name, in upper-case. // "receive" -> "RECEIVE" 10.3.1.2. Built-in span names When defining custom span names, you can use the following BiFunctions in the ClientSpanNameProvider class. If no spanNameProvider is specified, CONSUMER_OPERATION_NAME and PRODUCER_OPERATION_NAME are used. BiFunction Description CONSUMER_OPERATION_NAME, PRODUCER_OPERATION_NAME Returns the operationName as the span name: "receive" for consumers and "send" for producers. CONSUMER_PREFIXED_OPERATION_NAME(String prefix), PRODUCER_PREFIXED_OPERATION_NAME(String prefix) Returns a String concatenation of prefix and operationName . CONSUMER_TOPIC, PRODUCER_TOPIC Returns the name of the topic that the message was sent to or retrieved from in the format (record.topic()) . PREFIXED_CONSUMER_TOPIC(String prefix), PREFIXED_PRODUCER_TOPIC(String prefix) Returns a String concatenation of prefix and the topic name in the format (record.topic()) . CONSUMER_OPERATION_NAME_TOPIC, PRODUCER_OPERATION_NAME_TOPIC Returns the operation name and the topic name: "operationName - record.topic()" . CONSUMER_PREFIXED_OPERATION_NAME_TOPIC(String prefix), PRODUCER_PREFIXED_OPERATION_NAME_TOPIC(String prefix) Returns a String concatenation of prefix and "operationName - record.topic()" . 10.3.2. Instrumenting Kafka Streams applications for tracing This section describes how to instrument Kafka Streams API applications for distributed tracing. Procedure In each Kafka Streams API application: Add the opentracing-kafka-streams dependency to the pom.xml file for your Kafka Streams API application: <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-streams</artifactId> <version>0.1.15.redhat-00001</version> </dependency> Create an instance of the TracingKafkaClientSupplier supplier interface: KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); Provide the supplier interface to KafkaStreams : KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start(); 10.4. Setting up tracing for MirrorMaker, Kafka Connect, and the Kafka Bridge Distributed tracing is supported for MirrorMaker, MirrorMaker 2.0, Kafka Connect (including Kafka Connect with Source2Image support), and the AMQ Streams Kafka Bridge. Tracing in MirrorMaker and MirrorMaker 2.0 For MirrorMaker and MirrorMaker 2.0, messages are traced from the source cluster to the target cluster. The trace data records messages entering and leaving the MirrorMaker or MirrorMaker 2.0 component. Tracing in Kafka Connect Only messages produced and consumed by Kafka Connect itself are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems. For more information, see Section 2.2.1, "Configuring Kafka Connect" . 
Tracing in the Kafka Bridge Messages produced and consumed by the Kafka Bridge are traced. Incoming HTTP requests from client applications to send and receive messages through the Kafka Bridge are also traced. To have end-to-end tracing, you must configure tracing in your HTTP clients. 10.4.1. Enabling tracing in MirrorMaker, Kafka Connect, and Kafka Bridge resources Update the configuration of KafkaMirrorMaker , KafkaMirrorMaker2 , KafkaConnect , KafkaConnectS2I , and KafkaBridge custom resources to specify and configure a Jaeger tracer service for each resource. Updating a tracing-enabled resource in your OpenShift cluster triggers two events: Interceptor classes are updated in the integrated consumers and producers in MirrorMaker, MirrorMaker 2.0, Kafka Connect, or the AMQ Streams Kafka Bridge. For MirrorMaker, MirrorMaker 2.0, and Kafka Connect, the tracing agent initializes a Jaeger tracer based on the tracing configuration defined in the resource. For the Kafka Bridge, a Jaeger tracer based on the tracing configuration defined in the resource is initialized by the Kafka Bridge itself. Procedure Perform these steps for each KafkaMirrorMaker , KafkaMirrorMaker2 , KafkaConnect , KafkaConnectS2I , and KafkaBridge resource. In the spec.template property, configure the Jaeger tracer service. For example: Jaeger tracer configuration for Kafka Connect apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... template: connectContainer: 1 env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: 2 type: jaeger #... Jaeger tracer configuration for MirrorMaker apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: #... template: mirrorMakerContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: type: jaeger #... Jaeger tracer configuration for MirrorMaker 2.0 apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: #... template: connectContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: type: jaeger #... Jaeger tracer configuration for the Kafka Bridge apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: #... template: bridgeContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: type: jaeger #... 1 Use the tracing environment variables as template configuration properties. 2 Set the spec.tracing.type property to jaeger . Create or update the resource: oc apply -f your-file Additional resources Section 13.2.40, " ContainerTemplate schema reference" Section 2.6, "Customizing OpenShift resources" | [
"<dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.1.0.redhat-00002</version> </dependency>",
"Tracer tracer = Configuration.fromEnv().getTracer();",
"GlobalTracer.register(tracer);",
"<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.15.redhat-00001</version> </dependency>",
"// Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer: TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); // Send: tracingProducer.send(...); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); // Subscribe: tracingConsumer.subscribe(Collections.singletonList(\"messages\")); // Get messages: ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); // Retrieve SpanContext from polled record (consumer side): ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"// Register the tracer with GlobalTracer: GlobalTracer.register(tracer); // Add the TracingProducerInterceptor to the sender properties: senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Send: producer.send(...); // Add the TracingConsumerInterceptor to the consumer properties: consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Subscribe: consumer.subscribe(Collections.singletonList(\"messages\")); // Get messages: ConsumerRecords<Integer, String> records = consumer.poll(1000); // Retrieve the SpanContext from a polled message (consumer side): ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"// Create a BiFunction for the KafkaProducer that operates on (String operationName, ProducerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ProducerRecord, String> producerSpanNameProvider = (operationName, producerRecord) -> \"CUSTOM_PRODUCER_NAME\"; // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer, producerSpanNameProvider); // Spans created by the tracingProducer will now have \"CUSTOM_PRODUCER_NAME\" as the span name. // Create a BiFunction for the KafkaConsumer that operates on (String operationName, ConsumerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ConsumerRecord, String> consumerSpanNameProvider = (operationName, consumerRecord) -> operationName.toUpperCase(); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer, passing in the consumerSpanNameProvider BiFunction: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer, consumerSpanNameProvider); // Spans created by the tracingConsumer will have the operation name as the span name, in upper-case. // \"receive\" -> \"RECEIVE\"",
"<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-streams</artifactId> <version>0.1.15.redhat-00001</version> </dependency>",
"KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer);",
"KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # template: connectContainer: 1 env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: 2 type: jaeger #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: # template: mirrorMakerContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: type: jaeger #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: type: jaeger #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # template: bridgeContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: type: jaeger #",
"apply -f your-file"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_openshift/assembly-distributed-tracing-str |
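The individual steps from sections 10.2 and 10.3, creating the tracer from the JAEGER_* environment variables, registering it with GlobalTracer, and adding the tracing interceptor, can be combined into one small producer. The following sketch is illustrative rather than a reference implementation: the bootstrap address and topic name are placeholders, and it assumes that at least JAEGER_SERVICE_NAME is set in the environment as described in section 10.2.2.

import io.jaegertracing.Configuration;
import io.opentracing.Tracer;
import io.opentracing.contrib.kafka.TracingProducerInterceptor;
import io.opentracing.util.GlobalTracer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

/**
 * Minimal traced producer: the Jaeger tracer is built from the JAEGER_* environment
 * variables and registered globally, and the interceptor creates a span for every send.
 */
public class TracedProducer {

    public static void main(String[] args) {
        Tracer tracer = Configuration.fromEnv().getTracer();
        GlobalTracer.register(tracer);

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "traced message"));
            producer.flush();
        }
        // Spans are reported to the Jaeger agent or collector configured through the environment variables.
    }
}

A consumer is instrumented the same way by setting TracingConsumerInterceptor in ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, as shown in section 10.3.1.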
Chapter 7. Management of monitoring stack using the Ceph Orchestrator | Chapter 7. Management of monitoring stack using the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrator with Cephadm in the backend to deploy monitoring and alerting stack. The monitoring stack consists of Prometheus, Prometheus exporters, Prometheus Alertmanager, and Grafana. Users need to either define these services with Cephadm in a YAML configuration file, or they can use the command line interface to deploy them. When multiple services of the same type are deployed, a highly-available setup is deployed. The node exporter is an exception to this rule. Note Red Hat Ceph Storage 6.0 does not support custom images for deploying monitoring services such as Prometheus, Grafana, Alertmanager, and node-exporter. The following monitoring services can be deployed with Cephadm: Prometheus is the monitoring and alerting toolkit. It collects the data provided by Prometheus exporters and fires preconfigured alerts if predefined thresholds have been reached. The Prometheus manager module provides a Prometheus exporter to pass on Ceph performance counters from the collection point in ceph-mgr . The Prometheus configuration, including scrape targets, such as metrics providing daemons, is set up automatically by Cephadm. Cephadm also deploys a list of default alerts, for example, health error, 10% OSDs down, or pgs inactive. Alertmanager handles alerts sent by the Prometheus server. It deduplicates, groups, and routes the alerts to the correct receiver. By default, the Ceph dashboard is automatically configured as the receiver. The Alertmanager handles alerts sent by the Prometheus server. Alerts can be silenced using the Alertmanager, but silences can also be managed using the Ceph Dashboard. Grafana is a visualization and alerting software. The alerting functionality of Grafana is not used by this monitoring stack. For alerting, the Alertmanager is used. By default, traffic to Grafana is encrypted with TLS. You can either supply your own TLS certificate or use a self-signed one. If no custom certificate has been configured before Grafana has been deployed, then a self-signed certificate is automatically created and configured for Grafana. Custom certificates for Grafana can be configured using the following commands: Syntax Node exporter is an exporter for Prometheus which provides data about the node on which it is installed. It is recommended to install the node exporter on all nodes. This can be done using the monitoring.yml file with the node-exporter service type. 7.1. Deploying the monitoring stack using the Ceph Orchestrator The monitoring stack consists of Prometheus, Prometheus exporters, Prometheus Alertmanager, Grafana, and Ceph Exporter. Ceph Dashboard makes use of these components to store and visualize detailed metrics on cluster usage and performance. You can deploy the monitoring stack using the service specification in YAML file format. All the monitoring services can have the network and port they bind to configured in the yml file. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the nodes. Procedure Enable the prometheus module in the Ceph Manager daemon. This exposes the internal Ceph metrics so that Prometheus can read them: Example Important Ensure this command is run before Prometheus is deployed. 
If the command was not run before the deployment, you must redeploy Prometheus to update the configuration: Navigate to the following directory: Syntax Example Note If the directory monitoring does not exist, create it. Create the monitoring.yml file: Example Edit the specification file with a content similar to the following example: Example Note Ensure the monitoring stack components alertmanager , prometheus , and grafana are deployed on the same host. The node-exporter and ceph-exporter components should be deployed on all the hosts. Apply monitoring services: Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example Important Prometheus, Grafana, and the Ceph dashboard are all automatically configured to talk to each other, resulting in a fully functional Grafana integration in the Ceph dashboard. 7.2. Removing the monitoring stack using the Ceph Orchestrator You can remove the monitoring stack using the ceph orch rm command. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Log into the Cephadm shell: Example Use the ceph orch rm command to remove the monitoring stack: Syntax Example Check the status of the process: Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See Deploying the monitoring stack using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information. | [
"ceph config-key set mgr/cephadm/grafana_key -i PRESENT_WORKING_DIRECTORY /key.pem ceph config-key set mgr/cephadm/grafana_crt -i PRESENT_WORKING_DIRECTORY /certificate.pem",
"ceph mgr module enable prometheus",
"ceph orch redeploy prometheus",
"cd /var/lib/ceph/ DAEMON_PATH /",
"cd /var/lib/ceph/monitoring/",
"touch monitoring.yml",
"service_type: prometheus service_name: prometheus placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: node-exporter --- service_type: alertmanager service_name: alertmanager placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: grafana service_name: grafana placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: ceph-exporter",
"ceph orch apply -i monitoring.yml",
"ceph orch ls",
"ceph orch ps --service_name= SERVICE_NAME",
"ceph orch ps --service_name=prometheus",
"cephadm shell",
"ceph orch rm SERVICE_NAME --force",
"ceph orch rm grafana ceph orch rm prometheus ceph orch rm node-exporter ceph orch rm ceph-exporter ceph orch rm alertmanager ceph mgr module disable prometheus",
"ceph orch status",
"ceph orch ls",
"ceph orch ps",
"ceph orch ps"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/operations_guide/management-of-monitoring-stack-using-the-ceph-orchestrator |
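After enabling the prometheus manager module and applying the monitoring specification, it can be useful to confirm that metrics are actually being served before looking at Grafana. The following Java sketch is only an illustration and not part of the Ceph tooling: the host name, the port (9283 is the usual default for the ceph-mgr prometheus module, but verify it for your deployment), and the sampled metric names are assumptions that may differ between releases.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Quick reachability check for the metrics endpoint exposed by the ceph-mgr
 * prometheus module. Prints a few sample lines of the Prometheus exposition output.
 */
public class CephMetricsCheck {

    public static void main(String[] args) throws Exception {
        // Assumption: the active manager host and the module's default port; adjust for your cluster.
        String metricsUrl = args.length > 0 ? args[0] : "http://host01:9283/metrics";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder().uri(URI.create(metricsUrl)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("HTTP status: " + response.statusCode());
        // Metric names are assumptions and may vary by release; adjust the filter as needed.
        response.body().lines()
                .filter(line -> line.startsWith("ceph_health_status") || line.startsWith("ceph_osd_up"))
                .limit(5)
                .forEach(System.out::println);
    }
}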
Chapter 8. Configuring authentication | Chapter 8. Configuring authentication This chapter covers several authentication topics. These topics include: Enforcing strict password and One Time Password (OTP) policies. Managing different credential types. Logging in with Kerberos. Disabling and enabling built-in credential types. 8.1. Password policies When Red Hat build of Keycloak creates a realm, it does not associate password policies with the realm. You can set a simple password with no restrictions on its length, security, or complexity. Simple passwords are unacceptable in production environments. Red Hat build of Keycloak has a set of password policies available through the Admin Console. Procedure Click Authentication in the menu. Click the Policies tab. Select the policy to add in the Add policy drop-down box. Enter a value that applies to the policy chosen. Click Save . Password policy After saving the policy, Red Hat build of Keycloak enforces the policy for new users. Note The new policy will not be effective for existing users. Therefore, make sure that you set the password policy from the beginning of the realm creation or add "Update password" to existing users or use "Expire password" to make sure that users update their passwords in "N" days, which will actually adjust to new password policies. 8.1.1. Password policy types 8.1.1.1. HashAlgorithm Passwords are not stored in cleartext. Before storage or validation, Red Hat build of Keycloak hashes passwords using standard hashing algorithms. PBKDF2 is the only built-in and default algorithm available. See the Server Developer Guide on how to add your own hashing algorithm. Note If you change the hashing algorithm, password hashes in storage will not change until the user logs in. 8.1.1.2. Hashing iterations Specifies the number of times Red Hat build of Keycloak hashes passwords before storage or verification. The default value is 210,000 in case that pbkdf2-sha512 is used as hashing algorithm, which is by default. If other hash algorithms are explicitly set by using the`HashAlgorithm` policy, the default count of hashing iterations could be different. For instance, it is 600,000 by default if the`pbkdf2-sha256` algorithm is used or 1,300,000 if the pbkdf2 algorithm (Algorithm pbkdf2 corresponds to PBKDF2 with HMAC-SHA1). Red Hat build of Keycloak hashes passwords to ensure that hostile actors with access to the password database cannot read passwords through reverse engineering. Note A high hashing iteration value can impact performance as it requires higher CPU power. 8.1.1.3. Digits The number of numerical digits required in the password string. 8.1.1.4. Lowercase characters The number of lower case letters required in the password string. 8.1.1.5. Uppercase characters The number of upper case letters required in the password string. 8.1.1.6. Special characters The number of special characters required in the password string. 8.1.1.7. Not username The password cannot be the same as the username. 8.1.1.8. Not email The password cannot be the same as the email address of the user. 8.1.1.9. Regular expression Password must match one or more defined Java regular expression patterns. See Java's regular expression documentation for the syntax of those expressions. 8.1.1.10. Expire password The number of days the password is valid. When the number of days has expired, the user must change their password. 8.1.1.11. Not recently used Password cannot be already used by the user. 
Red Hat build of Keycloak stores a history of used passwords. The number of old passwords stored is configurable in Red Hat build of Keycloak. 8.1.1.12. Password blacklist Password must not be in a blacklist file. Blacklist files are UTF-8 plain-text files with Unix line endings. Every line represents a blacklisted password. Red Hat build of Keycloak compares passwords in a case-insensitive manner. All passwords in the blacklist must be lowercase. The value of the blacklist file must be the name of the blacklist file, for example, 100k_passwords.txt . Blacklist files resolve against USD{kc.home.dir}/data/password-blacklists/ by default. Customize this path using: The keycloak.password.blacklists.path system property. The blacklistsPath property of the passwordBlacklist policy SPI configuration. To configure the blacklist folder using the CLI, use --spi-password-policy-password-blacklist-blacklists-path=/path/to/blacklistsFolder . A note about False Positives The current implementation uses a BloomFilter for fast and memory efficient containment checks, such as whether a given password is contained in a blacklist, with the possibility for false positives. By default a false positive probability of 0.01% is used. To change the false positive probability by CLI configuration, use --spi-password-policy-password-blacklist-false-positive-probability=0.00001 . 8.1.1.13. Maximum Authentication Age Specifies the maximum age of a user authentication in seconds with which the user can update a password without re-authentication. A value of 0 indicates that the user has to always re-authenticate with their current password before they can update the password. See AIA section for some additional details about this policy. 8.2. One Time Password (OTP) policies Red Hat build of Keycloak has several policies for setting up a FreeOTP or Google Authenticator One-Time Password generator. Procedure Click Authentication in the menu. Click the Policy tab. Click the OTP Policy tab. Otp Policy Red Hat build of Keycloak generates a QR code on the OTP set-up page, based on information configured in the OTP Policy tab. FreeOTP and Google Authenticator scan the QR code when configuring OTP. 8.2.1. Time-based or counter-based one time passwords The algorithms available in Red Hat build of Keycloak for your OTP generators are time-based and counter-based. With Time-Based One Time Passwords (TOTP), the token generator will hash the current time and a shared secret. The server validates the OTP by comparing the hashes within a window of time to the submitted value. TOTPs are valid for a short window of time. With Counter-Based One Time Passwords (HOTP), Red Hat build of Keycloak uses a shared counter rather than the current time. The Red Hat build of Keycloak server increments the counter with each successful OTP login. Valid OTPs change after a successful login. TOTP is more secure than HOTP because the matchable OTP is valid for a short window of time, while the OTP for HOTP is valid for an indeterminate amount of time. HOTP is more user-friendly than TOTP because no time limit exists to enter the OTP. HOTP requires a database update every time the server increments the counter. This update is a performance drain on the authentication server during heavy load. To increase efficiency, TOTP does not remember passwords used, so there is no need to perform database updates. The drawback is that it is possible to re-use TOTPs in the valid time interval. 8.2.2. TOTP configuration options 8.2.2.1. 
OTP hash algorithm The default algorithm is SHA1. The other, more secure options are SHA256 and SHA512. 8.2.2.2. Number of digits The length of the OTP. Short OTP's are user-friendly, easier to type, and easier to remember. Longer OTP's are more secure than shorter OTP's. 8.2.2.3. Look around window The number of intervals the server attempts to match the hash. This option is present in Red Hat build of Keycloak if the clock of the TOTP generator or authentication server becomes out-of-sync. The default value of 1 is adequate. For example, if the time interval for a token is 30 seconds, the default value of 1 means it will accept valid tokens in the 90-second window (time interval 30 seconds + look ahead 30 seconds + look behind 30 seconds). Every increment of this value increases the valid window by 60 seconds (look ahead 30 seconds + look behind 30 seconds). 8.2.2.4. OTP token period The time interval in seconds the server matches a hash. Each time the interval passes, the token generator generates a TOTP. 8.2.2.5. Reusable code Determine whether OTP tokens can be reused in the authentication process or user needs to wait for the token. Users cannot reuse those tokens by default, and the administrator needs to explicitly specify that those tokens can be reused. 8.2.3. HOTP configuration options 8.2.3.1. OTP hash algorithm The default algorithm is SHA1. The other, more secure options are SHA256 and SHA512. 8.2.3.2. Number of digits The length of the OTP. Short OTPs are user-friendly, easier to type, and easier to remember. Longer OTPs are more secure than shorter OTPs. 8.2.3.3. Look around window The number of and following intervals the server attempts to match the hash. This option is present in Red Hat build of Keycloak if the clock of the TOTP generator or authentication server become out-of-sync. The default value of 1 is adequate. This option is present in Red Hat build of Keycloak to cover when the user's counter gets ahead of the server. 8.2.3.4. Initial counter The value of the initial counter. 8.3. Authentication flows An authentication flow is a container of authentications, screens, and actions, during log in, registration, and other Red Hat build of Keycloak workflows. 8.3.1. Built-in flows Red Hat build of Keycloak has several built-in flows. You cannot modify these flows, but you can alter the flow's requirements to suit your needs. Procedure Click Authentication in the menu. Click on the Browser item in the list to see the details. Browser flow 8.3.1.1. Auth type The name of the authentication or the action to execute. If an authentication is indented, it is in a sub-flow. It may or may not be executed, depending on the behavior of its parent. Cookie The first time a user logs in successfully, Red Hat build of Keycloak sets a session cookie. If the cookie is already set, this authentication type is successful. Since the cookie provider returned success and each execution at this level of the flow is alternative , Red Hat build of Keycloak does not perform any other execution. This results in a successful login. Kerberos This authenticator is disabled by default and is skipped during the Browser Flow. Identity Provider Redirector This action is configured through the Actions > Config link. It redirects to another IdP for identity brokering . Forms Since this sub-flow is marked as alternative , it will not be executed if the Cookie authentication type passed. This sub-flow contains an additional authentication type that needs to be executed. 
Red Hat build of Keycloak loads the executions for this sub-flow and processes them. The first execution is the Username Password Form , an authentication type that renders the username and password page. It is marked as required , so the user must enter a valid username and password. The second execution is the Browser - Conditional OTP sub-flow. This sub-flow is conditional and executes depending on the result of the Condition - User Configured execution. If the result is true, Red Hat build of Keycloak loads the executions for this sub-flow and processes them. The execution is the Condition - User Configured authentication. This authentication checks if Red Hat build of Keycloak has configured other executions in the flow for the user. The Browser - Conditional OTP sub-flow executes only when the user has a configured OTP credential. The final execution is the OTP Form . Red Hat build of Keycloak marks this execution as required but it runs only when the user has an OTP credential set up because of the setup in the conditional sub-flow. If not, the user does not see an OTP form. 8.3.1.2. Requirement A set of radio buttons that control the execution of an action executes. 8.3.1.2.1. Required All Required elements in the flow must be successfully sequentially executed. The flow terminates if a required element fails. 8.3.1.2.2. Alternative Only a single element must successfully execute for the flow to evaluate as successful. Because the Required flow elements are sufficient to mark a flow as successful, any Alternative flow element within a flow containing Required flow elements will not execute. 8.3.1.2.3. Disabled The element does not count to mark a flow as successful. 8.3.1.2.4. Conditional This requirement type is only set on sub-flows. A Conditional sub-flow contains executions. These executions must evaluate to logical statements. If all executions evaluate as true , the Conditional sub-flow acts as Required . If any executions evaluate as false , the Conditional sub-flow acts as Disabled . If you do not set an execution, the Conditional sub-flow acts as Disabled . If a flow contains executions and the flow is not set to Conditional , Red Hat build of Keycloak does not evaluate the executions, and the executions are considered functionally Disabled . 8.3.2. Creating flows Important functionality and security considerations apply when you design a flow. To create a flow, perform the following: Procedure Click Authentication in the menu. Click Create flow . Note You can copy and then modify an existing flow. Click the "Action list" (the three dots at the end of the row), click Duplicate , and enter a name for the new flow. When creating a new flow, you must create a top-level flow first with the following options: Name The name of the flow. Description The description you can set to the flow. Top-Level Flow Type The type of flow. The type client is used only for the authentication of clients (applications). For all other cases, choose basic . Create a top-level flow When Red Hat build of Keycloak has created the flow, Red Hat build of Keycloak displays the Add step , and Add sub-flow buttons. An empty new flow Three factors determine the behavior of flows and sub-flows. The structure of the flow and sub-flows. The executions within the flows The requirements set within the sub-flows and the executions. Executions have a wide variety of actions, from sending a reset email to validating an OTP. Add executions with the Add step button. 
Adding an authentication execution Authentication executions can optionally have a reference value configured. This can be utilized by the Authentication Method Reference (AMR) protocol mapper to populate the amr claim in OIDC access and ID tokens (for more information on the AMR claim, see RFC-8176 ). When the Authentication Method Reference (AMR) protocol mapper is configured for a client, it will populate the amr claim with the reference value for any authenticator execution the user successfully completes during the authentication flow. Adding an authenticator reference value Two types of executions exist: automatic executions and interactive executions . Automatic executions are similar to the Cookie execution and will automatically perform their action in the flow. Interactive executions halt the flow to get input. Executions that run successfully set their status to success . For a flow to complete, it needs at least one execution with a status of success . You can add sub-flows to top-level flows with the Add sub-flow button. The Add sub-flow button displays the Create Execution Flow page. This page is similar to the Create Top Level Form page. The difference is that the Flow Type can be basic (default) or form . The form type constructs a sub-flow that generates a form for the user, similar to the built-in Registration flow. A sub-flow's success depends on how its executions evaluate, including its contained sub-flows. See the execution requirements section for an in-depth explanation of how sub-flows work. Note After adding an execution, check that the requirement has the correct value. All elements in a flow have a Delete option next to the element. Some executions have a ⚙ menu item (the gear icon) to configure the execution. It is also possible to add executions and sub-flows to sub-flows with the Add step and Add sub-flow links. Since the order of execution is important, you can move executions and sub-flows up and down by dragging their names. Warning Make sure to properly test your configuration when you configure the authentication flow to confirm that no security holes exist in your setup. We recommend that you test various corner cases. For example, consider testing the authentication behavior for a user when you remove various credentials from the user's account before authentication. As an example, when 2nd-factor authenticators, such as OTP Form or WebAuthn Authenticator, are configured in the flow as REQUIRED and the user does not have a credential of that particular type, the user will be able to set up the particular credential during authentication itself. This situation means that the user does not actually authenticate with this credential, because they set it up during the authentication. So for browser authentication, make sure to configure your authentication flow with some 1st-factor credentials such as Password or WebAuthn Passwordless Authenticator. 8.3.3. Creating a password-less browser login flow To illustrate the creation of flows, this section describes creating an advanced browser login flow. The purpose of this flow is to give a user a choice between logging in without a password by using WebAuthn , or with two-factor authentication using a password and OTP. Procedure Click Authentication in the menu. Click the Flows tab. Click Create flow . Enter Browser Password-less as a name. Click Create . Click Add execution . Select Cookie from the list. Click Add . Select Alternative for the Cookie authentication type to set its requirement to alternative. Click Add step .
Select Kerberos from the list. Click Add . Click Add step . Select Identity Provider Redirector from the list. Click Add . Select Alternative for the Identity Provider Redirector authentication type to set its requirement to alternative. Click Add sub-flow . Enter Forms as a name. Click Add . Select Alternative for the Forms authentication type to set its requirement to alternative. The common part with the browser flow Click + menu of the Forms execution. Select Add step . Select Username Form from the list. Click Add . At this stage, the form requires a username but no password. We must enable password authentication to avoid security risks. Click + menu of the Forms sub-flow. Click Add sub-flow . Enter Authentication as name. Click Add . Select Required for the Authentication authentication type to set its requirement to required. Click + menu of the Authentication sub-flow. Click Add step . Select WebAuthn Passwordless Authenticator from the list. Click Add . Select Alternative for the Webauthn Passwordless Authenticator authentication type to set its requirement to alternative. Click + menu of the Authentication sub-flow. Click Add sub-flow . Enter Password with OTP as name. Click Add . Select Alternative for the Password with OTP authentication type to set its requirement to alternative. Click + menu of the Password with OTP sub-flow. Click Add step . Select Password Form from the list. Click Add . Select Required for the Password Form authentication type to set its requirement to required. Click + menu of the Password with OTP sub-flow. Click Add step . Select OTP Form from the list. Click Add . Click Required for the OTP Form authentication type to set its requirement to required. Finally, change the bindings. Click the Action menu at the top of the screen. Select Bind flow from the menu. Click the Browser Flow drop-down list. Click Save . A password-less browser login After entering the username, the flow works as follows: If users have WebAuthn passwordless credentials recorded, they can use these credentials to log in directly. This is the password-less login. The user can also select Password with OTP because the WebAuthn Passwordless execution and the Password with OTP flow are set to Alternative . If they are set to Required , the user has to enter WebAuthn, password, and OTP. If the user selects the Try another way link with WebAuthn passwordless authentication, the user can choose between Password and Passkey (WebAuthn passwordless). When selecting the password, the user will need to continue and log in with the assigned OTP. If the user has no WebAuthn credentials, the user must enter the password and then the OTP. If the user has no OTP credential, they will be asked to record one. Note Since the WebAuthn Passwordless execution is set to Alternative rather than Required , this flow will never ask the user to register a WebAuthn credential. For a user to have a Webauthn credential, an administrator must add a required action to the user. Do this by: Enabling the Webauthn Register Passwordless required action in the realm (see the WebAuthn documentation). Setting the required action using the Credential Reset part of a user's Credentials management menu. Creating an advanced flow such as this can have side effects. For example, if you enable the ability to reset the password for users, this would be accessible from the password form. In the default Reset Credentials flow, users must enter their username. 
Since the user has already entered a username earlier in the Browser Password-less flow, this action is unnecessary for Red Hat build of Keycloak and suboptimal for user experience. To correct this problem, you can: Duplicate the Reset Credentials flow. Set its name to Reset Credentials for password-less , for example. Click Delete (trash icon) of the Choose user step. In the Action menu, select Bind flow , select Reset credentials flow from the drop-down list, and click Save . 8.3.4. Creating a browser login flow with step-up mechanism This section describes how to create an advanced browser login flow using the step-up mechanism. The purpose of step-up authentication is to allow access to clients or resources based on a specific authentication level of a user. Procedure Click Authentication in the menu. Click the Flows tab. Click Create flow . Enter Browser Incl Step up Mechanism as a name. Click Save . Click Add execution . Select Cookie from the list. Click Add . Select Alternative for the Cookie authentication type to set its requirement to alternative. Click Add sub-flow . Enter Auth Flow as a name. Click Add . Click Alternative for the Auth Flow authentication type to set its requirement to alternative. Now you configure the flow for the first authentication level. Click + menu of the Auth Flow . Click Add sub-flow . Enter 1st Condition Flow as a name. Click Add . Click Conditional for the 1st Condition Flow authentication type to set its requirement to conditional. Click + menu of the 1st Condition Flow . Click Add condition . Select Conditional - Level Of Authentication from the list. Click Add . Click Required for the Conditional - Level Of Authentication authentication type to set its requirement to required. Click ⚙️ (gear icon). Enter Level 1 as an alias. Enter 1 for the Level of Authentication (LoA). Set Max Age to 36000 . This value is in seconds and is equivalent to 10 hours, which is the default SSO Session Max timeout set in the realm. As a result, when a user authenticates with this level, subsequent SSO logins can re-use this level and the user does not need to authenticate with this level until the end of the user session, which is 10 hours by default. Click Save . Configure the condition for the first authentication level Click + menu of the 1st Condition Flow . Click Add step . Select Username Password Form from the list. Click Add . Now you configure the flow for the second authentication level. Click + menu of the Auth Flow . Click Add sub-flow . Enter 2nd Condition Flow as an alias. Click Add . Click Conditional for the 2nd Condition Flow authentication type to set its requirement to conditional. Click + menu of the 2nd Condition Flow . Click Add condition . Select Conditional - Level Of Authentication from the item list. Click Add . Click Required for the Conditional - Level Of Authentication authentication type to set its requirement to required. Click ⚙️ (gear icon). Enter Level 2 as an alias. Enter 2 for the Level of Authentication (LoA). Set Max Age to 0 . As a result, when a user authenticates, this level is valid just for the current authentication, but not for any subsequent SSO authentications. So the user will always need to authenticate again with this level when this level is requested. Click Save . Configure the condition for the second authentication level Click + menu of the 2nd Condition Flow . Click Add step . Select OTP Form from the list. Click Add . Click Required for the OTP Form authentication type to set its requirement to required.
Finally, change the bindings. Click the Action menu at the top of the screen. Select Bind flow from the list. Select Browser Flow in the drop-down list. Click Save . Browser login with step-up mechanism Request a certain authentication level To use the step-up mechanism, you specify a requested level of authentication (LoA) in your authentication request. The claims parameter is used for this purpose; an example request URL and the JSON representation of the claims parameter are shown in the example listings at the end of this chapter. The Red Hat build of Keycloak JavaScript adapter has support for easily constructing this JSON and sending it in the login request. See the JavaScript adapter documentation for more details. You can also use the simpler acr_values parameter instead of the claims parameter to request particular levels as non-essential. This is mentioned in the OIDC specification. You can also configure the default level for the particular client, which is used when the parameter acr_values or the parameter claims with the acr claim is not present. For further details, see Client ACR configuration . Note To request the acr_values as text (such as gold ) instead of a numeric value, you configure the mapping between the ACR and the LoA. It is possible to configure it at the realm level (recommended) or at the client level. For configuration see ACR to LoA Mapping . For more details see the official OIDC specification . Flow logic The logic for the configured authentication flow is as follows: If a client requests a high authentication level, meaning Level of Authentication 2 (LoA 2), a user has to perform full 2-factor authentication: Username/Password + OTP. However, if a user already has a session in Red Hat build of Keycloak that was logged in with username and password (LoA 1), the user is only asked for the second authentication factor (OTP). The option Max Age in the condition determines how long (in seconds) the subsequent authentication level is valid. This setting helps to decide whether the user will be asked to present the authentication factor again during a subsequent authentication. If the particular level X is requested by the claims or acr_values parameter and the user already authenticated with level X, but that level is expired (for example, Max Age is configured to 300 and the user authenticated 310 seconds ago), then the user will be asked to re-authenticate with that level. However, if the level is not yet expired, the user will automatically be considered as authenticated with that level. Using Max Age with the value 0 means that the particular level is valid just for this single authentication. Hence every re-authentication requesting that level will require the user to authenticate again with that level. This is useful for operations that require higher security in the application (for example, sending a payment) and always require authentication with the specific level. Warning Note that parameters such as claims or acr_values might be changed by the user in the URL when the login request is sent from the client to the Red Hat build of Keycloak via the user's browser. This situation can be mitigated if the client uses PAR (Pushed Authorization Requests), a request object, or other mechanisms that prevent the user from rewriting the parameters in the URL. Hence after the authentication, clients are encouraged to check the ID Token to double-check that acr in the token corresponds to the expected level.
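As an illustration of that last recommendation, the following is a minimal sketch of such a client-side check. It is not part of the product documentation: the class name is made up, the Jackson library is an assumed dependency, and the sketch assumes the ID token string has already been obtained and signature-verified by your OIDC client library.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.Base64;

public class AcrCheck {

    // Decodes the payload of an already signature-verified ID token and
    // compares its acr claim with the level the client requested.
    public static boolean hasExpectedAcr(String idToken, String expectedAcr) throws Exception {
        String payload = idToken.split("\\.")[1];
        byte[] claimsJson = Base64.getUrlDecoder().decode(payload);
        JsonNode claims = new ObjectMapper().readTree(claimsJson);
        JsonNode acr = claims.get("acr");
        return acr != null && expectedAcr.equals(acr.asText());
    }
}

Most OIDC client libraries expose the parsed claims directly, so in practice you would read the acr claim from the library's token object rather than decoding the token by hand.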
If no explicit level is requested by parameters, Red Hat build of Keycloak will require authentication with the first LoA condition found in the authentication flow, such as the Username/Password in the preceding example. When a user has already authenticated with that level and that level has expired, the user is not required to re-authenticate, but acr in the token will have the value 0. This result is considered as authentication based solely on a long-lived browser cookie, as mentioned in section 2 of the OIDC Core 1.0 specification. Note A conflict situation may arise when an admin specifies several flows, sets different LoA levels to each, and assigns the flows to different clients. However, the rule is always the same: if a user has a certain level, that level alone is sufficient to connect to a client. It is up to the admin to make sure that the LoA is coherent. Example scenario Max Age is configured as 300 seconds for the level 1 condition. A login request is sent without requesting any acr. Level 1 will be used and the user needs to authenticate with username and password. The token will have acr=1 . Another login request is sent after 100 seconds. The user is automatically authenticated due to the SSO and the token will return acr=1 . Another login request is sent after another 201 seconds (301 seconds since authentication in point 2). The user is automatically authenticated due to the SSO, but the token will return acr=0 because level 1 is considered expired. Another login request is sent, but now it will explicitly request ACR of level 1 in the claims parameter. The user will be asked to re-authenticate with username/password and then acr=1 will be returned in the token. ACR claim in the token The acr claim is added to the token by the acr loa level protocol mapper defined in the acr client scope. This client scope is the realm default client scope and hence will be added to all newly created clients in the realm. If you do not want the acr claim inside tokens or you need some custom logic for adding it, you can remove the client scope from your client. Note When the login request includes the claims parameter requesting acr as an essential claim, Red Hat build of Keycloak will always return one of the specified levels. If it is not able to return one of the specified levels (for example, if the requested level is unknown or higher than the conditions configured in the authentication flow), then Red Hat build of Keycloak will throw an error.
Automatic redirect to the registration or reset-credentials screen can be done as follows: When the client wants the user to be redirected directly to the registration, the OIDC client should replace the very last segment of the OIDC login URL path ( /auth ) with /registrations . So the full URL might be similar to the following: https://keycloak.example.com/realms/your_realm/protocol/openid-connect/registrations . When the client wants a user to be redirected directly to the Reset credentials flow, the OIDC client should replace the very last segment of the OIDC login URL path ( /auth ) with /forgot-credentials . Warning The preceding steps are the only supported method for a client to directly request a registration or reset-credentials flow. For security purposes, it is neither supported nor recommended for client applications to bypass OIDC/SAML flows and directly redirect to other Red Hat build of Keycloak endpoints (such as for instance endpoints under /realms/realm_name/login-actions or /realms/realm_name/broker ). 8.4. User session limits Limits on the number of sessions that a user can have can be configured. Sessions can be limited per realm or per client. To add session limits to a flow, perform the following steps. Click Add step for the flow. Select User session count limiter from the item list. Click Add . Click Required for the User Session Count Limiter authentication type to set its requirement to required. Click ⚙️ (gear icon) for the User Session Count Limiter . Enter an alias for this config. Enter the required maximum number of sessions that a user can have in this realm. For example, if 2 is the value, 2 SSO sessions are the maximum that each user can have in this realm. If 0 is the value, this check is disabled. Enter the required maximum number of sessions a user can have for the client. For example, if 2 is the value, then 2 SSO sessions are the maximum in this realm for each client. So when a user is trying to authenticate to client foo , but that user has already authenticated in 2 SSO sessions to client foo , either the authentication will be denied or an existing session will be killed, based on the configured behavior. If a value of 0 is used, this check is disabled. If both session limits and client session limits are enabled, it makes sense for the client session limits to always be lower than the realm session limits. The limit per client can never exceed the limit of all SSO sessions of this user. Select the behavior that is required when the user tries to create a session after the limit is reached. Available behaviors are: Deny new session - when a new session is requested and the session limit is reached, no new sessions can be created. Terminate oldest session - when a new session is requested and the session limit has been reached, the oldest session will be removed and the new session created. Optionally, add a custom error message to be displayed when the limit is reached. Note that the user session limits should be added to your bound Browser flow , Direct grant flow , Reset credentials and also to any Post broker login flow . The authenticator should be added at the point when the user is already known during authentication (usually at the end of the authentication flow) and should be typically REQUIRED. Note that it is not possible to have ALTERNATIVE and REQUIRED executions at the same level.
For most flows, such as Direct grant flow , Reset credentials , or Post broker login flow , it is recommended to add the authenticator as REQUIRED at the end of the authentication flow. Here is an example for the Reset credentials flow: For the Browser flow, consider not adding the Session Limits authenticator at the top level of the flow. This recommendation is due to the Cookie authenticator, which automatically re-authenticates users based on the SSO cookie. It is at the top level and it is better to not check session limits during SSO re-authentication because a user session already exists. So instead, consider adding a separate ALTERNATIVE subflow, such as the following authenticate-user-with-session-limit example, at the same level as Cookie . Then you can add a REQUIRED subflow, named for example real-authentication-subflow , as a nested subflow of authenticate-user-with-session-limit , and add a User Session Limit at the same level as well. Inside the real-authentication-subflow , you can add real authenticators in a similar fashion to the default browser flow. The following example flow allows users to authenticate with an identity provider or with password and OTP: Regarding the Post Broker login flow , you can add the User Session Limits as the only authenticator in the authentication flow as long as you have no other authenticators that you trigger after authentication with your identity provider. However, make sure that this flow is configured as Post Broker Flow at your identity providers. This requirement exists so that authentication with identity providers also participates in the session limits. Note Currently, the administrator is responsible for maintaining consistency between the different configurations. So make sure that all your flows use the same configuration of User Session Limits . Note The user session limit feature is not available for CIBA. 8.5. Kerberos Red Hat build of Keycloak supports login with a Kerberos ticket through the Simple and Protected GSSAPI Negotiation Mechanism (SPNEGO) protocol. SPNEGO authenticates users transparently through the web browser after the user authenticates to the session. For non-web cases, or when a ticket is not available during login, Red Hat build of Keycloak supports login with a Kerberos username and password. A typical use case for web authentication is the following: The user logs into the desktop. The user accesses a web application secured by Red Hat build of Keycloak using a browser. The application redirects to Red Hat build of Keycloak login. Red Hat build of Keycloak renders the HTML login screen with status 401 and HTTP header WWW-Authenticate: Negotiate . If the browser has a Kerberos ticket from desktop login, the browser transfers the desktop sign-on information to Red Hat build of Keycloak in header Authorization: Negotiate 'spnego-token' . Otherwise, it displays the standard login screen, and the user enters the login credentials. Red Hat build of Keycloak validates the token from the browser and authenticates the user. If using LDAPFederationProvider with Kerberos authentication support, Red Hat build of Keycloak provisions user data from LDAP. If using KerberosFederationProvider, Red Hat build of Keycloak lets the user update the profile and pre-fill login data. Red Hat build of Keycloak returns to the application. Red Hat build of Keycloak and the application communicate through OpenID Connect or SAML messages. Red Hat build of Keycloak acts as a broker to Kerberos/SPNEGO login.
Therefore, Red Hat build of Keycloak authenticating through Kerberos is hidden from the application. Warning The Negotiate www-authenticate scheme allows NTLM as a fallback to Kerberos, and on some web browsers in Windows, NTLM is supported by default. If a www-authenticate challenge comes from a server outside a browser's permitted list, users may encounter an NTLM dialog prompt. A user would need to click the cancel button on the dialog to continue, as Red Hat build of Keycloak does not support this mechanism. This situation can happen if Intranet web browsers are not strictly configured or if Red Hat build of Keycloak serves users in both the Intranet and Internet. A custom authenticator can be used to restrict Negotiate challenges to a whitelist of hosts. Perform the following steps to set up Kerberos authentication: The setup and configuration of the Kerberos server (KDC). The setup and configuration of the Red Hat build of Keycloak server. The setup and configuration of the client machines. 8.5.1. Setup of Kerberos server The steps to set up a Kerberos server depend on the operating system (OS) and the Kerberos vendor. Consult Windows Active Directory, MIT Kerberos, and your OS documentation for instructions on setting up and configuring a Kerberos server. During setup, perform these steps: Add some user principals to your Kerberos database. You can also integrate your Kerberos with LDAP, so that user accounts are provisioned from the LDAP server. Add a service principal for the "HTTP" service. For example, if the Red Hat build of Keycloak server runs on www.mydomain.org , add the service principal HTTP/www.mydomain.org@<kerberos realm> . On MIT Kerberos, you run a "kadmin" session with the sudo kadmin.local command. Then, add the HTTP principal and export its key to a keytab file with the addprinc and ktadd commands; the exact commands are shown in the example listings at the end of this chapter. Ensure the keytab file /tmp/http.keytab is accessible on the host where Red Hat build of Keycloak is running. 8.5.2. Setup and configuration of Red Hat build of Keycloak server Install a Kerberos client on your machine. Procedure Install a Kerberos client. If your machine runs Fedora, Ubuntu, or RHEL, install the freeipa-client package, containing a Kerberos client and other utilities. Configure the Kerberos client (on Linux, the configuration settings are in the /etc/krb5.conf file ). Add your Kerberos realm to the configuration and configure the HTTP domains your server runs on. For example, for the MYDOMAIN.ORG realm, you can configure the domain_realm section as shown in the example listings at the end of this chapter. Export the keytab file with the HTTP principal and ensure the file is accessible to the process running the Red Hat build of Keycloak server. For production, ensure that the file is readable by this process only. For the MIT Kerberos example above, we exported the keytab to the /tmp/http.keytab file. If your Key Distribution Centre (KDC) and Red Hat build of Keycloak run on the same host, the file is already available. 8.5.2.1. Enabling SPNEGO processing By default, Red Hat build of Keycloak disables SPNEGO protocol support. To enable it, go to the browser flow and enable Kerberos . Browser flow Set the Kerberos requirement from disabled to alternative (Kerberos is optional) or required (the browser must have Kerberos enabled). If you have not configured the browser to work with SPNEGO or Kerberos, Red Hat build of Keycloak falls back to the regular login screen.
8.5.2.2. Configure Kerberos user storage federation providers You must now use User Storage Federation to configure how Red Hat build of Keycloak interprets Kerberos tickets. Two different federation providers exist with Kerberos authentication support. To authenticate with Kerberos backed by an LDAP server, configure the LDAP Federation Provider . Procedure Go to the configuration page for your LDAP provider. Ldap kerberos integration Toggle Allow Kerberos authentication to ON . Allow Kerberos authentication makes Red Hat build of Keycloak use the Kerberos principal to access user information, so that the information can be imported into the Red Hat build of Keycloak environment. If an LDAP server is not backing your Kerberos solution, use the Kerberos User Storage Federation Provider. Procedure Click User Federation in the menu. Select Kerberos from the Add provider select box. Kerberos user storage provider The Kerberos provider parses the Kerberos ticket for simple principal information and imports the information into the local Red Hat build of Keycloak database. User profile information, such as first name, last name, and email, is not provisioned. 8.5.3. Setup and configuration of client machines Client machines must have a Kerberos client and set up the krb5.conf as described above . The client machines must also enable SPNEGO login support in their browser. See configuring Firefox for Kerberos if you are using the Firefox browser. The .mydomain.org URI must be in the network.negotiate-auth.trusted-uris configuration option. In Windows domains, clients do not need to adjust their configuration. Internet Explorer and Edge can already participate in SPNEGO authentication. 8.5.4. Credential delegation Kerberos supports credential delegation. Applications may need access to the Kerberos ticket so they can re-use it to interact with other services secured by Kerberos. Because the Red Hat build of Keycloak server processes the SPNEGO protocol, you must propagate the GSS credential to your application within the OpenID Connect token claim or a SAML assertion attribute. Red Hat build of Keycloak transmits this to your application from the Red Hat build of Keycloak server. To insert this claim into the token or assertion, each application must enable the built-in protocol mapper gss delegation credential . This mapper is available in the Mappers tab of the application's client page. See the Protocol Mappers chapter for more details. Applications must deserialize the claim they receive from Red Hat build of Keycloak before using it to make GSS calls against other services. When you deserialize the credential from the access token to the GSSCredential object, create the GSSContext with this credential passed to the GSSManager.createContext method. For example: // Obtain accessToken in your application. KeycloakPrincipal keycloakPrincipal = (KeycloakPrincipal) servletReq.getUserPrincipal(); AccessToken accessToken = keycloakPrincipal.getKeycloakSecurityContext().getToken(); // Retrieve Kerberos credential from accessToken and deserialize it String serializedGssCredential = (String) accessToken.getOtherClaims(). get(org.keycloak.common.constants.KerberosConstants.GSS_DELEGATION_CREDENTIAL); GSSCredential deserializedGssCredential = org.keycloak.common.util.KerberosSerializationUtils.
deserializeCredential(serializedGssCredential); // Create GSSContext to call other Kerberos-secured services GSSContext context = gssManager.createContext(serviceName, krb5Oid, deserializedGssCredential, GSSContext.DEFAULT_LIFETIME); Note Configure forwardable Kerberos tickets in krb5.conf file and add support for delegated credentials to your browser. Warning Credential delegation has security implications, so use it only if necessary and only with HTTPS. See this article for more details and an example. 8.5.5. Cross-realm trust In the Kerberos protocol, the realm is a set of Kerberos principals. The definition of these principals exists in the Kerberos database, which is typically an LDAP server. The Kerberos protocol allows cross-realm trust. For example, if 2 Kerberos realms, A and B, exist, then cross-realm trust will allow the users from realm A to access realm B's resources. Realm B trusts realm A. Kerberos cross-realm trust The Red Hat build of Keycloak server supports cross-realm trust. To implement this, perform the following: Configure the Kerberos servers for the cross-realm trust. Implementing this step depends on the Kerberos server implementations. This step is necessary to add the Kerberos principal krbtgt/B@A to the Kerberos databases of realm A and B. This principal must have the same keys on both Kerberos realms. The principals must have the same password, key version numbers, and ciphers in both realms. Consult the Kerberos server documentation for more details. Note The cross-realm trust is unidirectional by default. You must add the principal krbtgt/A@B to both Kerberos databases for bidirectional trust between realm A and realm B. However, trust is transitive by default. If realm B trusts realm A and realm C trusts realm B, then realm C trusts realm A without the principal, krbtgt/C@A , available. Additional configuration (for example, capaths ) may be necessary on the Kerberos client-side so clients can find the trust path. Consult the Kerberos documentation for more details. Configure Red Hat build of Keycloak server When using an LDAP storage provider with Kerberos support, configure the server principal for realm B, as in this example: HTTP/mydomain.com@B . The LDAP server must find the users from realm A if users from realm A are to successfully authenticate to Red Hat build of Keycloak, because Red Hat build of Keycloak must perform the SPNEGO flow and then find the users. Finding users is based on the LDAP storage provider option Kerberos principal attribute . When this is configured for instance with value like userPrincipalName , then after SPNEGO authentication of user john@A , Red Hat build of Keycloak will try to lookup LDAP user with attribute userPrincipalName equivalent to john@A . If Kerberos principal attribute is left empty, then Red Hat build of Keycloak will lookup the LDAP user based on the prefix of his kerberos principal with the realm omitted. For example, Kerberos principal user john@A must be available in the LDAP under username john , so typically under an LDAP DN such as uid=john,ou=People,dc=example,dc=com . If you want users from realm A and B to authenticate, ensure that LDAP can find users from both realms A and B. When using a Kerberos user storage provider (typically, Kerberos without LDAP integration), configure the server principal as HTTP/mydomain.com@B , and users from Kerberos realms A and B must be able to authenticate. 
Users from multiple Kerberos realms are allowed to authenticate, as every user has the attribute KERBEROS_PRINCIPAL referring to the Kerberos principal used for authentication, and this attribute is used for further lookups of this user. To avoid conflicts when there is a user john in both Kerberos realms A and B , the username of the Red Hat build of Keycloak user might contain the Kerberos realm in lowercase. For instance, the username would be john@a . When the realm matches the configured Kerberos realm , the realm suffix might be omitted from the generated username. For instance, the username would be john for the Kerberos principal john@A as long as the Kerberos realm configured on the Kerberos provider is A . 8.5.6. Troubleshooting If you have issues, enable additional logging to debug the problem: Enable the Debug flag in the Admin Console for Kerberos or LDAP federation providers Enable TRACE logging for category org.keycloak to receive more information in server logs Add system properties -Dsun.security.krb5.debug=true and -Dsun.security.spnego.debug=true 8.6. X.509 client certificate user authentication Red Hat build of Keycloak supports logging in with an X.509 client certificate if you have configured the server to use mutual SSL authentication. A typical workflow: A client sends an authentication request over an SSL/TLS channel. During the SSL/TLS handshake, the server and the client exchange their x.509/v3 certificates. The container (JBoss EAP) validates the certificate PKIX path and the certificate expiration date. The x.509 client certificate authenticator validates the client certificate by using the following methods: Checks the certificate revocation status by using CRL or CRL Distribution Points. Checks the certificate revocation status by using OCSP (Online Certificate Status Protocol). Validates whether the key usage in the certificate matches the expected key usage. Validates whether the extended key usage in the certificate matches the expected extended key usage. If any of these checks fail, the x.509 authentication fails. Otherwise, the authenticator extracts the certificate identity and maps it to an existing user. When the certificate maps to an existing user, the behavior diverges depending on the authentication flow: In the Browser Flow, the server prompts users to confirm their identity or sign in with a username and password. In the Direct Grant Flow, the server signs in the user. Important Note that it is the responsibility of the web container to validate the certificate PKIX path. The X.509 authenticator on the Red Hat build of Keycloak side provides just the additional support for checking the certificate expiration, certificate revocation status, and key usage. If you are using Red Hat build of Keycloak deployed behind a reverse proxy, make sure that your reverse proxy is configured to validate the PKIX path. If you do not use a reverse proxy and users directly access JBoss EAP, you should be fine, as JBoss EAP makes sure that the PKIX path is validated as long as it is configured as described below. 8.6.1. Features Supported Certificate Identity Sources: Match SubjectDN by using regular expressions X500 Subject's email attribute X500 Subject's email from Subject Alternative Name Extension (RFC822Name General Name) X500 Subject's other name from Subject Alternative Name Extension. This other name is the User Principal Name (UPN), typically.
X500 Subject's Common Name attribute Match IssuerDN by using regular expressions Certificate Serial Number Certificate Serial Number and IssuerDN SHA-256 Certificate thumbprint Full certificate in PEM format 8.6.1.1. Regular expressions Red Hat build of Keycloak extracts the certificate identity from Subject DN or Issuer DN by using a regular expression as a filter. For example, this regular expression matches the email attribute: The regular expression filtering applies if the Identity Source is set to either Match SubjectDN using regular expression or Match IssuerDN using regular expression . 8.6.1.1.1. Mapping certificate identity to an existing user The certificate identity mapping can map the extracted user identity to an existing user's username, email, or a custom attribute whose value matches the certificate identity. For example, setting Identity source to Subject's email or User mapping method to Username or email makes the X.509 client certificate authenticator use the email attribute in the certificate's Subject DN as the search criteria when searching for an existing user by username or by email. Important If you disable Login with email at realm settings, the same rules apply to certificate authentication. Users are unable to log in by using the email attribute. Using Certificate Serial Number and IssuerDN as an identity source requires two custom attributes for the serial number and the IssuerDN. SHA-256 Certificate thumbprint is the lowercase hexadecimal representation of SHA-256 certificate thumbprint. Using Full certificate in PEM format as an identity source is limited to the custom attributes mapped to external federation sources, such as LDAP. Red Hat build of Keycloak cannot store certificates in its database due to length limitations, so in the case of LDAP, you must enable Always Read Value From LDAP . 8.6.1.1.2. Extended certificate validation Revocation status checking using CRL. Revocation status checking using CRL/Distribution Point. Revocation status checking using OCSP/Responder URI. Certificate KeyUsage validation. Certificate ExtendedKeyUsage validation. 8.6.2. Adding X.509 client certificate authentication to browser flows Click Authentication in the menu. Click the Browser flow. From the Action list, select Duplicate . Enter a name for the copy. Click Duplicate . Click Add step . Click "X509/Validate Username Form". Click Add . X509 execution Click and drag the "X509/Validate Username Form" over the "Browser Forms" execution. Set the requirement to "ALTERNATIVE". X509 browser flow Click the Action menu. Click the Bind flow . Click the Browser flow from the drop-down list. Click Save . X509 browser flow bindings 8.6.3. Configuring X.509 client certificate authentication X509 configuration User Identity Source Defines the method for extracting the user identity from a client certificate. Canonical DN representation enabled Defines whether to use canonical format to determine a distinguished name. The official Java API documentation describes the format. This option affects the two User Identity Sources Match SubjectDN using regular expression and Match IssuerDN using regular expression only. Enable this option when you set up a new Red Hat build of Keycloak instance. Disable this option to retain backward compatibility with existing Red Hat build of Keycloak instances. Enable Serial Number hexadecimal representation Represent the serial number as hexadecimal. The serial number with the sign bit set to 1 must be left padded with 00 octet. 
For example, a serial number with decimal value 161 , or a1 in hexadecimal representation is encoded as 00a1 , according to RFC5280. See RFC5280, appendix-B for more details. A regular expression A regular expression to use as a filter for extracting the certificate identity. The expression must contain a single group. User Mapping Method Defines the method to match the certificate identity with an existing user. Username or email searches for existing users by username or email. Custom Attribute Mapper searches for existing users with a custom attribute that matches the certificate identity. The name of the custom attribute is configurable. A name of user attribute A custom attribute whose value matches against the certificate identity. Use multiple custom attributes when attribute mapping is related to multiple values, For example, 'Certificate Serial Number and IssuerDN'. CRL Checking Enabled Check the revocation status of the certificate by using the Certificate Revocation List. The location of the list is defined in the CRL file path attribute. Enable CRL Distribution Point to check certificate revocation status Use CDP to check the certificate revocation status. Most PKI authorities include CDP in their certificates. CRL file path The path to a file containing a CRL list. The value must be a path to a valid file if the CRL Checking Enabled option is enabled. OCSP Checking Enabled Checks the certificate revocation status by using Online Certificate Status Protocol. OCSP Fail-Open Behavior By default the OCSP check must return a positive response in order to continue with a successful authentication. Sometimes however this check can be inconclusive: for example, the OCSP server could be unreachable, overloaded, or the client certificate may not contain an OCSP responder URI. When this setting is turned ON, authentication will be denied only if an explicit negative response is received by the OCSP responder and the certificate is definitely revoked. If a valid OCSP response is not available the authentication attempt will be accepted. OCSP Responder URI Override the value of the OCSP responder URI in the certificate. Validate Key Usage Verifies the certificate's KeyUsage extension bits are set. For example, "digitalSignature,KeyEncipherment" verifies if bits 0 and 2 in the KeyUsage extension are set. Leave this parameter empty to disable the Key Usage validation. See RFC5280, Section-4.2.1.3 for more information. Red Hat build of Keycloak raises an error when a key usage mismatch occurs. Validate Extended Key Usage Verifies one or more purposes defined in the Extended Key Usage extension. See RFC5280, Section-4.2.1.12 for more information. Leave this parameter empty to disable the Extended Key Usage validation. Red Hat build of Keycloak raises an error when flagged as critical by the issuing CA and a key usage extension mismatch occurs. Validate Certificate Policy Verifies one or more policy OIDs as defined in the Certificate Policy extension. See RFC5280, Section-4.2.1.4 . Leave the parameter empty to disable the Certificate Policy validation. Multiple policies should be separated using a comma. Certificate Policy Validation Mode When more than one policy is specified in the Validate Certificate Policy setting, it decides whether the matching should check for all requested policies to be present, or one match is enough for a successful authentication. Default value is All , meaning that all requested policies should be present in the client certificate. 
Bypass identity confirmation If enabled, X.509 client certificate authentication does not prompt the user to confirm the certificate identity. Red Hat build of Keycloak signs in the user upon successful authentication. Revalidate client certificate If set, the client certificate trust chain will be always verified at the application level using the certificates present in the configured trust store. This can be useful if the underlying web server does not enforce client certificate chain validation, for example because it is behind a non-validating load balancer or reverse proxy, or when the number of allowed CAs is too large for the mutual SSL negotiation (most browsers cap the maximum SSL negotiation packet size at 32767 bytes, which corresponds to about 200 advertised CAs). By default this option is off. 8.6.4. Adding X.509 Client Certificate Authentication to a Direct Grant Flow Click Authentication in the menu. Select Duplicate from the "Action list" to make a copy of the built-in "Direct grant" flow. Enter a name for the copy. Click Duplicate . Click the created flow. Click the trash can icon 🗑️ of the "Username Validation" and click Delete . Click the trash can icon 🗑️ of the "Password" and click Delete . Click Add step . Click "X509/Validate Username". Click Add . X509 direct grant execution Set up the x509 authentication configuration by following the steps described in the x509 Browser Flow section. Click the Bindings tab. Click the Direct Grant Flow drop-down list. Click the newly created "x509 Direct Grant" flow. Click Save . X509 direct grant flow bindings 8.7. W3C Web Authentication (WebAuthn) Red Hat build of Keycloak provides support for W3C Web Authentication (WebAuthn) . Red Hat build of Keycloak works as a WebAuthn Relying Party (RP) . Note The success of WebAuthn operations depends on the user's WebAuthn-supporting authenticator, browser, and platform. Make sure your authenticator, browser, and platform support the WebAuthn specification. 8.7.1. Setup The setup procedure of WebAuthn support for 2FA is the following: 8.7.1.1. Enable WebAuthn authenticator registration Click Authentication in the menu. Click the Required Actions tab. Toggle the Webauthn Register switch to ON . Toggle the Default Action switch to ON if you want all new users to be required to register their WebAuthn credentials. 8.7.2. Adding WebAuthn authentication to a browser flow Click Authentication in the menu. Click the Browser flow. Select Duplicate from the "Action list" to make a copy of the built-in Browser flow. Enter "WebAuthn Browser" as the name of the copy. Click Duplicate . Click the name to go to the details. Click the trash can icon 🗑️ of the "WebAuthn Browser Browser - Conditional OTP" and click Delete . If you require WebAuthn for all users: Click + menu of the WebAuthn Browser Forms . Click Add step . Click WebAuthn Authenticator . Click Add . Select Required for the WebAuthn Authenticator authentication type to set its requirement to required. Click the Action menu at the top of the screen. Select Bind flow from the drop-down list. Select Browser from the drop-down list. Click Save . Note If a user does not have WebAuthn credentials, the user must register WebAuthn credentials. Users can log in with WebAuthn only if they have a WebAuthn credential registered. So instead of adding the WebAuthn Authenticator execution, you can: Procedure Click + menu of the WebAuthn Browser Forms row. Click Add sub-flow . Enter "Conditional 2FA" for the name field.
Select Conditional for the Conditional 2FA to set its requirement to conditional. On the Conditional 2FA row, click the plus sign + and select Add condition . Click Add condition . Select Condition - User Configured . Click Add . Select Required for the Condition - User Configured to set its requirement to required. Drag and drop WebAuthn Authenticator into the Conditional 2FA flow Select Alternative for the WebAuthn Authenticator to set its requirement to alternative. The user can choose between using WebAuthn and OTP for the second factor: Procedure On the Conditional 2FA row, click the plus sign + and select Add step . Select OTP Form from the list. Click Add . Select Alternative for the OTP Form to set its requirement to alternative. 8.7.3. Authenticate with WebAuthn authenticator After registering a WebAuthn authenticator, the user carries out the following operations: Open the login form. The user must authenticate with a username and password. The user's browser asks the user to authenticate by using their WebAuthn authenticator. 8.7.4. Managing WebAuthn as an administrator 8.7.4.1. Managing credentials Red Hat build of Keycloak manages WebAuthn credentials similarly to other credentials from User credential management : Red Hat build of Keycloak assigns users a required action to create a WebAuthn credential from the Reset Actions list and select Webauthn Register . Administrators can delete a WebAuthn credential by clicking Delete . Administrators can view the credential's data, such as the AAGUID, by selecting Show data... . Administrators can set a label for the credential by setting a value in the User Label field and saving the data. 8.7.4.2. Managing policy Administrators can configure WebAuthn related operations as WebAuthn Policy per realm. Procedure Click Authentication in the menu. Click the Policy tab. Click the WebAuthn Policy tab. Configure the items within the policy (see description below). Click Save . The configurable items and their description are as follows: Configuration Description Relying Party Entity Name The readable server name as a WebAuthn Relying Party. This item is mandatory and applies to the registration of the WebAuthn authenticator. The default setting is "keycloak". For more details, see WebAuthn Specification . Signature Algorithms The algorithms telling the WebAuthn authenticator which signature algorithms to use for the Public Key Credential . Red Hat build of Keycloak uses the Public Key Credential to sign and verify Authentication Assertions . If no algorithms exist, the default ES256 is adapted. ES256 is an optional configuration item applying to the registration of WebAuthn authenticators. For more details, see WebAuthn Specification . Relying Party ID The ID of a WebAuthn Relying Party that determines the scope of Public Key Credentials . The ID must be the origin's effective domain. This ID is an optional configuration item applied to the registration of WebAuthn authenticators. If this entry is blank, Red Hat build of Keycloak adapts the host part of Red Hat build of Keycloak's base URL. For more details, see WebAuthn Specification . Attestation Conveyance Preference The WebAuthn API implementation on the browser ( WebAuthn Client ) is the preferential method to generate Attestation statements. This preference is an optional configuration item applying to the registration of the WebAuthn authenticator. If no option exists, its behavior is the same as selecting "none". For more details, see WebAuthn Specification . 
Authenticator Attachment The acceptable attachment pattern of a WebAuthn authenticator for the WebAuthn Client. This pattern is an optional configuration item applying to the registration of the WebAuthn authenticator. For more details, see WebAuthn Specification . Require Discoverable Credential The option requiring that the WebAuthn authenticator generates the Public Key Credential as Client-side discoverable Credential . This option applies to the registration of the WebAuthn authenticator. If left blank, its behavior is the same as selecting "No". For more details, see WebAuthn Specification . User Verification Requirement The option requiring that the WebAuthn authenticator confirms the verification of a user. This is an optional configuration item applying to the registration of a WebAuthn authenticator and the authentication of a user by a WebAuthn authenticator. If no option exists, its behavior is the same as selecting "preferred". For more details, see WebAuthn Specification for registering a WebAuthn authenticator and WebAuthn Specification for authenticating the user by a WebAuthn authenticator . Timeout The timeout value, in seconds, for registering a WebAuthn authenticator and authenticating the user by using a WebAuthn authenticator. If set to zero, its behavior depends on the WebAuthn authenticator's implementation. The default value is 0. For more details, see WebAuthn Specification for registering a WebAuthn authenticator and WebAuthn Specification for authenticating the user by a WebAuthn authenticator . Avoid Same Authenticator Registration If enabled, Red Hat build of Keycloak cannot re-register an already registered WebAuthn authenticator. Acceptable AAGUIDs The white list of AAGUIDs which a WebAuthn authenticator must register against. 8.7.5. Attestation statement verification When registering a WebAuthn authenticator, Red Hat build of Keycloak verifies the trustworthiness of the attestation statement generated by the WebAuthn authenticator. Red Hat build of Keycloak requires the trust anchor's certificates imported into the truststore . To omit this validation, disable this truststore or set the WebAuthn policy's configuration item "Attestation Conveyance Preference" to "none". 8.7.6. Managing WebAuthn credentials as a user 8.7.6.1. Register WebAuthn authenticator The appropriate method to register a WebAuthn authenticator depends on whether the user has already registered an account on Red Hat build of Keycloak. 8.7.6.2. New user If the WebAuthn Register required action is Default Action in a realm, new users must set up the Passkey after their first login. Procedure Open the login form. Click Register . Fill in the items on the form. Click Register . After successfully registering, the browser asks the user to enter the text of their WebAuthn authenticator's label. 8.7.6.3. Existing user If WebAuthn Authenticator is set up as required as shown in the first example, then when existing users try to log in, they are required to register their WebAuthn authenticator automatically: Procedure Open the login form. Enter the items on the form. Click Save . Click Login . After successful registration, the user's browser asks the user to enter the text of their WebAuthn authenticator's label. 8.7.7. Passwordless WebAuthn together with Two-Factor Red Hat build of Keycloak uses WebAuthn for two-factor authentication, but you can use WebAuthn as the first-factor authentication. 
In this case, users with passwordless WebAuthn credentials can authenticate to Red Hat build of Keycloak without a password. Red Hat build of Keycloak can use WebAuthn as both the passwordless and two-factor authentication mechanism in the context of a realm and a single authentication flow. An administrator typically requires that Passkeys registered by users for the WebAuthn passwordless authentication meet different requirements. For example, the Passkeys may require users to authenticate to the Passkey using a PIN, or that the Passkey attests with a stronger certificate authority. Because of this, Red Hat build of Keycloak permits administrators to configure a separate WebAuthn Passwordless Policy . There is a required action of type Webauthn Register Passwordless and a separate authenticator of type WebAuthn Passwordless Authenticator . 8.7.7.1. Setup Set up WebAuthn passwordless support as follows: (if not already present) Register a new required action for WebAuthn passwordless support. Use the steps described in Enable WebAuthn Authenticator Registration . Register the Webauthn Register Passwordless action. Configure the policy. You can use the steps and configuration options described in Managing Policy . Perform the configuration in the Admin Console in the tab WebAuthn Passwordless Policy . Typically the requirements for the Passkey will be stronger than for the two-factor policy. For example, you can set the User Verification Requirement to Required when you configure the passwordless policy. Configure the authentication flow. Use the WebAuthn Browser flow described in Adding WebAuthn Authentication to a Browser Flow . Configure the flow as follows: The WebAuthn Browser Forms subflow contains Username Form as the first authenticator. Delete the default Username Password Form authenticator and add the Username Form authenticator. This action requires the user to provide a username as the first step. There will be a required subflow, which can be named Passwordless Or Two-factor , for example. This subflow indicates that the user can authenticate with a Passwordless WebAuthn credential or with two-factor authentication. The flow contains WebAuthn Passwordless Authenticator as the first alternative. The second alternative will be a subflow named Password And Two-factor Webauthn , for example. This subflow contains a Password Form and a WebAuthn Authenticator . The final configuration of the flow looks similar to this: PasswordLess flow You can now add WebAuthn Register Passwordless as the required action to a user, already known to Red Hat build of Keycloak, to test this. During the first authentication, the user must use the password and second-factor WebAuthn credential. The user does not need to provide the password and second-factor WebAuthn credential if they use the WebAuthn Passwordless credential. 8.7.8. LoginLess WebAuthn Red Hat build of Keycloak uses WebAuthn for two-factor authentication, but you can use WebAuthn as the first-factor authentication. In this case, users with passwordless WebAuthn credentials can authenticate to Red Hat build of Keycloak without submitting a login or a password. Red Hat build of Keycloak can use WebAuthn as both the loginless/passwordless and two-factor authentication mechanism in the context of a realm. An administrator typically requires that Passkeys registered by users for the WebAuthn loginless authentication meet different requirements.
Loginless authentication requires users to authenticate to the Passkey (for example by using a PIN code or a fingerprint) and that the cryptographic keys associated with the loginless credential are stored physically on the Passkey. Not all Passkeys meet that kind of requirement. Check with your Passkey vendor if your device supports 'user verification' and 'discoverable credential'. See Supported Passkeys . Red Hat build of Keycloak permits administrators to configure the WebAuthn Passwordless Policy in a way that allows loginless authentication. Note that loginless authentication can only be configured with WebAuthn Passwordless Policy and with WebAuthn Passwordless credentials. WebAuthn loginless authentication and WebAuthn passwordless authentication can be configured on the same realm but will share the same policy WebAuthn Passwordless Policy . 8.7.8.1. Setup Procedure Set up WebAuthn Loginless support as follows: (if not already present) Register a new required action for WebAuthn passwordless support. Use the steps described in Enable WebAuthn Authenticator Registration . Register the Webauthn Register Passwordless action. Configure the WebAuthn Passwordless Policy . Perform the configuration in the Admin Console, Authentication section, in the tab Policies WebAuthn Passwordless Policy . You have to set User Verification Requirement to required and Require Discoverable Credential to Yes when you configure the policy for loginless scenario. Note that since there isn't a dedicated Loginless policy it won't be possible to mix authentication scenarios with user verification=no/discoverable credential=no and loginless scenarios (user verification=yes/discoverable credential=yes). Storage capacity is usually very limited on Passkeys meaning that you won't be able to store many discoverable credentials on your Passkey. Configure the authentication flow. Create a new authentication flow, add the "WebAuthn Passwordless" execution and set the Requirement setting of the execution to Required The final configuration of the flow looks similar to this: LoginLess flow You can now add the required action WebAuthn Register Passwordless to a user, already known to Red Hat build of Keycloak, to test this. The user with the required action configured will have to authenticate (with a username/password for example) and will then be prompted to register a Passkey to be used for loginless authentication. 8.7.8.2. Vendor specific remarks 8.7.8.2.1. Compatibility check list Loginless authentication with Red Hat build of Keycloak requires the Passkey to meet the following features FIDO2 compliance: not to be confused with FIDO/U2F User verification: the ability for the Passkey to authenticate the user (prevents someone finding your Passkey to be able to authenticate loginless and passwordless) Discoverable Credential: the ability for the Passkey to store the login and the cryptographic keys associated with the client application 8.7.8.2.2. Windows Hello To use Windows Hello based credentials to authenticate against Red Hat build of Keycloak, configure the Signature Algorithms setting of the WebAuthn Passwordless Policy to include the RS256 value. Note that some browsers don't allow access to platform Passkey (like Windows Hello) inside private windows. 8.7.8.2.3. Supported Passkeys The following Passkeys have been successfully tested for loginless authentication with Red Hat build of Keycloak: Windows Hello (Windows 10 21H1/21H2) Yubico Yubikey 5 NFC Feitian ePass FIDO-NFC 8.8. 
Recovery Codes (RecoveryCodes) You can configure Recovery codes for two-factor authentication by adding 'Recovery Authentication Code Form' as a two-factor authenticator to your authentication flow. For an example of configuring this authenticator, see WebAuthn . Note RecoveryCodes is Technology Preview and is not fully supported. This feature is disabled by default. To enable it, start the server with --features=preview or --features=recovery-codes . 8.9. Conditions in conditional flows As was mentioned in Execution requirements , Condition executions can only be contained in a Conditional sub-flow. If all Condition executions evaluate as true, then the Conditional sub-flow acts as Required . You can process the execution in the Conditional sub-flow. If any of the Condition executions included in the Conditional sub-flow evaluate as false, then the whole sub-flow is considered as Disabled . 8.9.1. Available conditions Condition - User Role This execution has the ability to determine if the user has a role defined by the User role field. If the user has the required role, the execution is considered as true and other executions are evaluated. The administrator has to define the following fields: Alias Describes a name of the execution, which will be shown in the authentication flow. User role Role the user should have to execute this flow. To specify an application role, the syntax is appname.approle (for example myapp.myrole ). Condition - User Configured This checks if the other executions in the flow are configured for the user. The Execution requirements section includes an example of the OTP form. Condition - User Attribute This checks if the user has set up the required attribute: optionally, the check can also evaluate the group attributes. There is a possibility to negate the output, which means the user should not have the attribute. The User Attributes section shows how to add a custom attribute. You can provide these fields: Alias Describes a name of the execution, which will be shown in the authentication flow. Attribute name Name of the attribute to check. Expected attribute value Expected value in the attribute. Include group attributes If On, the condition checks if any of the joined groups has an attribute matching the configured name and value; this option can affect performance. Negate output You can negate the output. In other words, the attribute should not be present. 8.9.2. Explicitly deny/allow access in conditional flows You can allow or deny access to resources in a conditional flow. The two authenticators Deny Access and Allow Access control access to the resources by conditions. Allow Access Authenticator will always successfully authenticate. This authenticator is not configurable. Deny Access Access will always be denied. You can define an error message, which will be shown to the user. You can provide these fields: Alias Describes a name of the execution, which will be shown in the authentication flow. Error message Error message which will be shown to the user. The error message could be provided as a particular message or as a property in order to use it with localization (for example, " You do not have the role 'admin'. ", or my-property-deny in the messages properties). Leave blank for the default message defined as the property access-denied . Here is an example of how to deny access to all users who do not have the role role1 and show an error message defined by a property deny-role1 . This example includes Condition - User Role and Deny Access executions.
Browser flow Condition - user role configuration Configuring the Deny Access execution is straightforward. You can specify an arbitrary Alias and the required message like this: The last step is defining the property with the error message in the login theme messages_en.properties (for English): 8.10. Passkeys Red Hat build of Keycloak provides preview support for Passkeys . Red Hat build of Keycloak works as a Passkeys Relying Party (RP). Passkey registration and authentication are implemented through the WebAuthn features. Therefore, users of Red Hat build of Keycloak can perform Passkey registration and authentication by using the existing WebAuthn registration and authentication flows. Note Both synced Passkeys and device-bound Passkeys can be used for both Same-Device and Cross-Device Authentication (CDA). However, the success of Passkey operations depends on the user's environment. Verify which operations can succeed in that environment. | [
"https://{DOMAIN}/realms/{REALMNAME}/protocol/openid-connect/auth?client_id={CLIENT-ID}&redirect_uri={REDIRECT-URI}&scope=openid&response_type=code&response_mode=query&nonce=exg16fxdjcu&claims=%7B%22id_token%22%3A%7B%22acr%22%3A%7B%22essential%22%3Atrue%2C%22values%22%3A%5B%22gold%22%5D%7D%7D%7D",
"claims= { \"id_token\": { \"acr\": { \"essential\": true, \"values\": [\"gold\"] } } }",
"sudo kadmin.local",
"addprinc -randkey HTTP/[email protected] ktadd -k /tmp/http.keytab HTTP/[email protected]",
"[domain_realm] .mydomain.org = MYDOMAIN.ORG mydomain.org = MYDOMAIN.ORG",
"// Obtain accessToken in your application. KeycloakPrincipal keycloakPrincipal = (KeycloakPrincipal) servletReq.getUserPrincipal(); AccessToken accessToken = keycloakPrincipal.getKeycloakSecurityContext().getToken(); // Retrieve Kerberos credential from accessToken and deserialize it String serializedGssCredential = (String) accessToken.getOtherClaims(). get(org.keycloak.common.constants.KerberosConstants.GSS_DELEGATION_CREDENTIAL); GSSCredential deserializedGssCredential = org.keycloak.common.util.KerberosSerializationUtils. deserializeCredential(serializedGssCredential); // Create GSSContext to call other Kerberos-secured services GSSContext context = gssManager.createContext(serviceName, krb5Oid, deserializedGssCredential, GSSContext.DEFAULT_LIFETIME);",
"emailAddress=(.*?)(?:,|USD)",
"deny-role1 = You do not have required role!"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_administration_guide/configuring-authentication_server_administration_guide |
Chapter 14. Setting up client access to a Kafka cluster | Chapter 14. Setting up client access to a Kafka cluster After you have deployed Streams for Apache Kafka , you can set up client access to your Kafka cluster. To verify the deployment, you can deploy example producer and consumer clients. Otherwise, create listeners that provide client access within or outside the OpenShift cluster. 14.1. Deploying example clients Deploy example producer and consumer clients to send and receive messages. You can use these clients to verify a deployment of Streams for Apache Kafka. Prerequisites The Kafka cluster is available for the clients. Procedure Deploy a Kafka producer. oc run kafka-producer -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server cluster-name -kafka-bootstrap:9092 --topic my-topic Type a message into the console where the producer is running. Press Enter to send the message. Deploy a Kafka consumer. oc run kafka-consumer -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server cluster-name -kafka-bootstrap:9092 --topic my-topic --from-beginning Confirm that you see the incoming messages in the consumer console. 14.2. Configuring listeners to connect to Kafka brokers Use listeners for client connection to Kafka brokers. Streams for Apache Kafka provides a generic GenericKafkaListener schema with properties to configure listeners through the Kafka resource. The GenericKafkaListener provides a flexible approach to listener configuration. You can specify properties to configure internal listeners for connecting within the OpenShift cluster or external listeners for connecting outside the OpenShift cluster. Specify a connection type to expose Kafka in the listener configuration. The type chosen depends on your requirements, and your environment and infrastructure. The following listener types are supported: Internal listeners internal to connect within the same OpenShift cluster cluster-ip to expose Kafka using per-broker ClusterIP services External listeners nodeport to use ports on OpenShift nodes loadbalancer to use loadbalancer services ingress to use Kubernetes Ingress and the Ingress NGINX Controller for Kubernetes (Kubernetes only) route to use OpenShift Route and the default HAProxy router (OpenShift only) Important Do not use ingress on OpenShift, use the route type instead. The Ingress NGINX Controller is only intended for use on Kubernetes. The route type is only supported on OpenShift. An internal type listener configuration uses a headless service and the DNS names given to the broker pods. You might want to join your OpenShift network to an outside network. In which case, you can configure an internal type listener (using the useServiceDnsDomain property) so that the OpenShift service DNS domain (typically .cluster.local ) is not used. You can also configure a cluster-ip type of listener that exposes a Kafka cluster based on per-broker ClusterIP services. This is a useful option when you can't route through the headless service or you wish to incorporate a custom access mechanism. For example, you might use this listener when building your own type of external listener for a specific Ingress controller or the OpenShift Gateway API. External listeners handle access to a Kafka cluster from networks that require different authentication mechanisms. 
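As a minimal sketch of the cluster-ip listener type mentioned above (the listener name and port are illustrative and are not taken from the examples in this chapter), the configuration follows the same pattern as the other listener types:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      # Per-broker ClusterIP services are created for this listener
      - name: clusterip
        port: 9096
        type: cluster-ip
        tls: true
    # ...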
You can configure external listeners for client access outside an OpenShift environment using a specified connection mechanism, such as a loadbalancer or route. For example, loadbalancers might not be suitable for certain infrastructure, such as bare metal, where node ports provide a better option. Each listener is defined as an array in the Kafka resource. Example listener configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... listeners: - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-certificate.crt key: my-key.key # ... You can configure as many listeners as required, as long as their names and ports are unique. You can also configure listeners for secure connection using authentication. If you want to know more about the pros and cons of each connection type, refer to Accessing Apache Kafka in Strimzi . Note If you scale your Kafka cluster while using external listeners, it might trigger a rolling update of all Kafka brokers. This depends on the configuration. Additional resources GenericKafkaListener schema reference 14.3. Listener naming conventions From the listener configuration, the resulting listener bootstrap and per-broker service names are structured according to the following naming conventions: Table 14.1. Listener naming conventions Listener type Bootstrap service name Per-Broker service name internal <cluster_name>-kafka-bootstrap Not applicable loadbalancer nodeport ingress route cluster-ip <cluster_name>-kafka-<listener-name>-bootstrap <cluster_name>-kafka-<listener-name>-<idx> For example, my-cluster-kafka-bootstrap , my-cluster-kafka-external1-bootstrap , and my-cluster-kafka-external1-0 . The names are assigned to the services, routes, load balancers, and ingresses created through the listener configuration. You can use certain backwards compatible names and port numbers to transition listeners initially configured under the retired KafkaListeners schema. The resulting external listener naming convention varies slightly. These specific combinations of listener name and port configuration values are backwards compatible: Table 14.2. Backwards compatible listener name and port combinations Listener name Port Bootstrap service name Per-Broker service name plain 9092 <cluster_name>-kafka-bootstrap Not applicable tls 9093 <cluster-name>-kafka-bootstrap Not applicable external 9094 <cluster_name>-kafka-bootstrap <cluster_name>-kafka-bootstrap-<idx> 14.4. Setting up client access to a Kafka cluster using listeners Using the address of the Kafka cluster, you can provide access to a client in the same OpenShift cluster; or provide external access to a client on a different OpenShift namespace or outside OpenShift entirely. This procedure shows how to configure client access to a Kafka cluster from outside OpenShift or from another OpenShift cluster. A Kafka listener provides access to the Kafka cluster. Client access is secured using the following configuration: An external listener is configured for the Kafka cluster, with TLS encryption and mTLS authentication, and Kafka simple authorization enabled. A KafkaUser is created for the client, with mTLS authentication, and Access Control Lists (ACLs) defined for simple authorization. 
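As a compact illustration of how those two pieces line up (resource names are placeholders, and the full versions appear later in this procedure), the listener and the user both declare the same tls authentication type:

# Listener fragment in the Kafka resource
listeners:
  - name: external1
    port: 9094
    type: route
    tls: true
    authentication:
      type: tls
---
# Matching fragment in the KafkaUser resource
spec:
  authentication:
    type: tls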
You can configure your listener to use mutual tls , scram-sha-512 , or oauth authentication. mTLS always uses encryption, but encryption is also recommended when using SCRAM-SHA-512 and OAuth 2.0 authentication. You can configure simple , oauth , opa , or custom authorization for Kafka brokers. When enabled, authorization is applied to all enabled listeners. When you configure the KafkaUser authentication and authorization mechanisms, ensure they match the equivalent Kafka configuration: KafkaUser.spec.authentication matches Kafka.spec.kafka.listeners[*].authentication KafkaUser.spec.authorization matches Kafka.spec.kafka.authorization You should have at least one listener supporting the authentication you want to use for the KafkaUser . Note Authentication between Kafka users and Kafka brokers depends on the authentication settings for each. For example, it is not possible to authenticate a user with mTLS if it is not also enabled in the Kafka configuration. Streams for Apache Kafka operators automate the configuration process and create the certificates required for authentication: The Cluster Operator creates the listeners and sets up the cluster and client certificate authority (CA) certificates to enable authentication with the Kafka cluster. The User Operator creates the user representing the client and the security credentials used for client authentication, based on the chosen authentication type. You add the certificates to your client configuration. In this procedure, the CA certificates generated by the Cluster Operator are used, but you can replace them by installing your own certificates . You can also configure your listener to use a Kafka listener certificate managed by an external CA (certificate authority) . Certificates are available in PEM (.crt) and PKCS #12 (.p12) formats. This procedure uses PEM certificates. Use PEM certificates with clients that use certificates in X.509 format. Note For internal clients in the same OpenShift cluster and namespace, you can mount the cluster CA certificate in the pod specification. For more information, see Configuring internal clients to trust the cluster CA . Prerequisites The Kafka cluster is available for connection by a client running outside the OpenShift cluster The Cluster Operator and User Operator are running in the cluster Procedure Configure the Kafka cluster with a Kafka listener. Define the authentication required to access the Kafka broker through the listener. Enable authorization on the Kafka broker. Example listener configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... listeners: 1 - name: external1 2 port: 9094 3 type: <listener_type> 4 tls: true 5 authentication: type: tls 6 configuration: 7 #... authorization: 8 type: simple superUsers: - super-user-name 9 # ... 1 Configuration options for enabling external listeners are described in the Generic Kafka listener schema reference . 2 Name to identify the listener. Must be unique within the Kafka cluster. 3 Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients. 4 External listener type specified as route (OpenShift only), loadbalancer , nodeport or ingress (Kubernetes only). 
An internal listener is specified as internal or cluster-ip . 5 Required. TLS encryption on the listener. For route and ingress type listeners it must be set to true . For mTLS authentication, also use the authentication property. 6 Client authentication mechanism on the listener. For server and client authentication using mTLS, you specify tls: true and authentication.type: tls . 7 (Optional) Depending on the requirements of the listener type, you can specify additional listener configuration . 8 Authorization specified as simple , which uses the AclAuthorizer and StandardAuthorizer Kafka plugins. 9 (Optional) Super users can access all brokers regardless of any access restrictions defined in ACLs. Warning An OpenShift route address comprises the Kafka cluster name, the listener name, the project name, and the domain of the router. For example, my-cluster-kafka-external1-bootstrap-my-project.domain.com (<cluster_name>-kafka-<listener_name>-bootstrap-<namespace>.<domain>). Each DNS label (between periods ".") must not exceed 63 characters, and the total length of the address must not exceed 255 characters. Create or update the Kafka resource. oc apply -f <kafka_configuration_file> The Kafka cluster is configured with a Kafka broker listener using mTLS authentication. A service is created for each Kafka broker pod. A service is created to serve as the bootstrap address for connection to the Kafka cluster. A service is also created as the external bootstrap address for external connection to the Kafka cluster using nodeport listeners. The cluster CA certificate to verify the identity of the kafka brokers is also created in the secret <cluster_name> -cluster-ca-cert . Note If you scale your Kafka cluster while using external listeners, it might trigger a rolling update of all Kafka brokers. This depends on the configuration. Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the Kafka resource. oc get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==" <listener_name> ")].bootstrapServers}{"\n"}' For example: oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external")].bootstrapServers}{"\n"}' Use the bootstrap address in your Kafka client to connect to the Kafka cluster. Create or modify a user representing the client that requires access to the Kafka cluster. Specify the same authentication type as the Kafka listener. Specify the authorization ACLs for simple authorization. Example user configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster 1 spec: authentication: type: tls 2 authorization: type: simple acls: 3 - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read - resource: type: group name: my-group patternType: literal operations: - Read 1 The label must match the label of the Kafka cluster. 2 Authentication specified as mutual tls . 3 Simple authorization requires an accompanying list of ACL rules to apply to the user. The rules define the operations allowed on Kafka resources based on the username ( my-user ). Create or modify the KafkaUser resource. oc apply -f USER-CONFIG-FILE The user is created, as well as a secret with the same name as the KafkaUser resource. The secret contains a public and private key for mTLS authentication. 
Example secret apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store Extract the cluster CA certificate from the <cluster_name> -cluster-ca-cert secret of the Kafka cluster. oc get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Extract the user CA certificate from the <user_name> secret. oc get secret <user_name> -o jsonpath='{.data.user\.crt}' | base64 -d > user.crt Extract the private key of the user from the <user_name> secret. oc get secret <user_name> -o jsonpath='{.data.user\.key}' | base64 -d > user.key Configure your client with the bootstrap address hostname and port for connecting to the Kafka cluster: props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, " <hostname>:<port> "); Configure your client with the truststore credentials to verify the identity of the Kafka cluster. Specify the public cluster CA certificate. Example truststore configuration props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL"); props.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "PEM"); props.put(SslConfigs.SSL_TRUSTSTORE_CERTIFICATES_CONFIG, " <ca.crt_file_content> "); SSL is the specified security protocol for mTLS authentication. Specify SASL_SSL for SCRAM-SHA-512 authentication over TLS. PEM is the file format of the truststore. Configure your client with the keystore credentials to verify the user when connecting to the Kafka cluster. Specify the public certificate and private key. Example keystore configuration props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL"); props.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, "PEM"); props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, " <user.crt_file_content> "); props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, " <user.key_file_content> "); Add the keystore certificate and the private key directly to the configuration. Add as a single-line format. Between the BEGIN CERTIFICATE and END CERTIFICATE delimiters, start with a newline character ( \n ). End each line from the original certificate with \n too. Example keystore configuration props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, "-----BEGIN CERTIFICATE----- \n <user_certificate_content_line_1> \n <user_certificate_content_line_n> \n-----END CERTIFICATE---"); props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, "----BEGIN PRIVATE KEY-----\n <user_key_content_line_1> \n <user_key_content_line_n> \n-----END PRIVATE KEY-----"); Additional resources Section 15.1.1, "Listener authentication" Section 15.1.2, "Kafka authorization" If you are using an authorization server, you can use token-based authentication and authorization: Section 15.4, "Using OAuth 2.0 token-based authentication" Section 15.5, "Using OAuth 2.0 token-based authorization" 14.5. Accessing Kafka using node ports Use node ports to access a Streams for Apache Kafka cluster from an external client outside the OpenShift cluster. To connect to a broker, you specify a hostname and port number for the Kafka bootstrap address, as well as the certificate used for TLS encryption. The procedure shows basic nodeport listener configuration. 
You can use listener properties to enable TLS encryption ( tls ) and specify a client authentication mechanism ( authentication ). Add additional configuration using configuration properties. For example, you can use the following configuration properties with nodeport listeners: preferredNodePortAddressType Specifies the first address type that's checked as the node address. externalTrafficPolicy Specifies whether the service routes external traffic to node-local or cluster-wide endpoints. nodePort Overrides the assigned node port numbers for the bootstrap and broker services. For more information on listener configuration, see the GenericKafkaListener schema reference . Prerequisites A running Cluster Operator In this procedure, the Kafka cluster name is my-cluster . The name of the listener is external4 . Procedure Configure a Kafka resource with an external listener set to the nodeport type. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # ... listeners: - name: external4 port: 9094 type: nodeport tls: true authentication: type: tls # ... # ... zookeeper: # ... Create or update the resource. oc apply -f <kafka_configuration_file> A cluster CA certificate to verify the identity of the kafka brokers is created in the secret my-cluster-cluster-ca-cert . NodePort type services are created for each Kafka broker, as well as an external bootstrap service. Node port services created for the bootstrap and brokers NAME TYPE CLUSTER-IP PORT(S) my-cluster-kafka-external4-0 NodePort 172.30.55.13 9094:31789/TCP my-cluster-kafka-external4-1 NodePort 172.30.250.248 9094:30028/TCP my-cluster-kafka-external4-2 NodePort 172.30.115.81 9094:32650/TCP my-cluster-kafka-external4-bootstrap NodePort 172.30.30.23 9094:32650/TCP The bootstrap address used for client connection is propagated to the status of the Kafka resource. Example status for the bootstrap address status: clusterId: Y_RJQDGKRXmNF7fEcWldJQ conditions: - lastTransitionTime: '2023-01-31T14:59:37.113630Z' status: 'True' type: Ready kafkaVersion: 3.7.0 listeners: # ... - addresses: - host: ip-10-0-224-199.us-west-2.compute.internal port: 32650 bootstrapServers: 'ip-10-0-224-199.us-west-2.compute.internal:32650' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external4 observedGeneration: 2 operatorLastSuccessfulVersion: 2.7 # ... Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the Kafka resource. oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external4")].bootstrapServers}{"\n"}' ip-10-0-224-199.us-west-2.compute.internal:32650 Extract the cluster CA certificate. oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Configure your client to connect to the brokers. Specify the bootstrap host and port in your Kafka client as the bootstrap address to connect to the Kafka cluster. For example, ip-10-0-224-199.us-west-2.compute.internal:32650 . Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection. If you enabled a client authentication mechanism, you will also need to configure it in your client. Note If you are using your own listener certificates, check whether you need to add the CA certificate to the client's truststore configuration. If it is a public (external) CA, you usually won't need to add it. 14.6. 
Accessing Kafka using loadbalancers Use loadbalancers to access a Streams for Apache Kafka cluster from an external client outside the OpenShift cluster. To connect to a broker, you specify a hostname and port number for the Kafka bootstrap address, as well as the certificate used for TLS encryption. The procedure shows basic loadbalancer listener configuration. You can use listener properties to enable TLS encryption ( tls ) and specify a client authentication mechanism ( authentication ). Add additional configuration using configuration properties. For example, you can use the following configuration properties with loadbalancer listeners: loadBalancerSourceRanges Restricts traffic to a specified list of CIDR (Classless Inter-Domain Routing) ranges. externalTrafficPolicy Specifies whether the service routes external traffic to node-local or cluster-wide endpoints. loadBalancerIP Requests a specific IP address when creating a loadbalancer. For more information on listener configuration, see the GenericKafkaListener schema reference . Prerequisites A running Cluster Operator In this procedure, the Kafka cluster name is my-cluster . The name of the listener is external3 . Procedure Configure a Kafka resource with an external listener set to the loadbalancer type. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # ... listeners: - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls # ... # ... zookeeper: # ... Create or update the resource. oc apply -f <kafka_configuration_file> A cluster CA certificate to verify the identity of the kafka brokers is also created in the secret my-cluster-cluster-ca-cert . loadbalancer type services and loadbalancers are created for each Kafka broker, as well as an external bootstrap service. Loadbalancer services and loadbalancers created for the bootstraps and brokers NAME TYPE CLUSTER-IP PORT(S) my-cluster-kafka-external3-0 LoadBalancer 172.30.204.234 9094:30011/TCP my-cluster-kafka-external3-1 LoadBalancer 172.30.164.89 9094:32544/TCP my-cluster-kafka-external3-2 LoadBalancer 172.30.73.151 9094:32504/TCP my-cluster-kafka-external3-bootstrap LoadBalancer 172.30.30.228 9094:30371/TCP NAME EXTERNAL-IP (loadbalancer) my-cluster-kafka-external3-0 a8a519e464b924000b6c0f0a05e19f0d-1132975133.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-1 ab6adc22b556343afb0db5ea05d07347-611832211.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-2 a9173e8ccb1914778aeb17eca98713c0-777597560.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-bootstrap a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com The bootstrap address used for client connection is propagated to the status of the Kafka resource. Example status for the bootstrap address status: clusterId: Y_RJQDGKRXmNF7fEcWldJQ conditions: - lastTransitionTime: '2023-01-31T14:59:37.113630Z' status: 'True' type: Ready kafkaVersion: 3.7.0 listeners: # ... - addresses: - host: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com port: 9094 bootstrapServers: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external3 observedGeneration: 2 operatorLastSuccessfulVersion: 2.7 # ... The DNS addresses used for client connection are propagated to the status of each loadbalancer service. 
Example status for the bootstrap loadbalancer status: loadBalancer: ingress: - hostname: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com # ... Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the Kafka resource. oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external3")].bootstrapServers}{"\n"}' a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094 Extract the cluster CA certificate. oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Configure your client to connect to the brokers. Specify the bootstrap host and port in your Kafka client as the bootstrap address to connect to the Kafka cluster. For example, a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094 . Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection. If you enabled a client authentication mechanism, you will also need to configure it in your client. Note If you are using your own listener certificates, check whether you need to add the CA certificate to the client's truststore configuration. If it is a public (external) CA, you usually won't need to add it. 14.7. Accessing Kafka using OpenShift routes Use OpenShift routes to access a Streams for Apache Kafka cluster from clients outside the OpenShift cluster. To be able to use routes, add configuration for a route type listener in the Kafka custom resource. When applied, the configuration creates a dedicated route and service for an external bootstrap and each broker in the cluster. Clients connect to the bootstrap route, which routes them through the bootstrap service to connect to a broker. Per-broker connections are then established using DNS names, which route traffic from the client to the broker through the broker-specific routes and services. To connect to a broker, you specify a hostname for the route bootstrap address, as well as the certificate used for TLS encryption. For access using routes, the port is always 443. Warning An OpenShift route address comprises the Kafka cluster name, the listener name, the project name, and the domain of the router. For example, my-cluster-kafka-external1-bootstrap-my-project.domain.com (<cluster_name>-kafka-<listener_name>-bootstrap-<namespace>.<domain>). Each DNS label (between periods ".") must not exceed 63 characters, and the total length of the address must not exceed 255 characters. The procedure shows basic listener configuration. TLS encryption ( tls ) must be enabled. You can also specify a client authentication mechanism ( authentication ). Add additional configuration using configuration properties. For example, you can use the host configuration property with route listeners to specify the hostnames used by the bootstrap and per-broker services. For more information on listener configuration, see the GenericKafkaListener schema reference . TLS passthrough TLS passthrough is enabled for routes created by Streams for Apache Kafka. Kafka uses a binary protocol over TCP, but routes are designed to work with a HTTP protocol. To be able to route TCP traffic through routes, Streams for Apache Kafka uses TLS passthrough with Server Name Indication (SNI). SNI helps with identifying and passing connection to Kafka brokers. In passthrough mode, TLS encryption is always used. 
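For illustration only, a per-broker route created for such a listener is roughly equivalent to the following Route resource with passthrough termination. The object that the Cluster Operator actually generates may carry additional labels and fields, so treat this as a sketch rather than the literal output:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-cluster-kafka-external1-0
  namespace: my-project
spec:
  to:
    kind: Service
    name: my-cluster-kafka-external1-0
  port:
    targetPort: 9094
  tls:
    termination: passthrough   # TCP/TLS traffic is passed through to the broker unmodified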
Because the connection passes to the brokers, the listeners use TLS certificates signed by the internal cluster CA and not the ingress certificates. To configure listeners to use your own listener certificates, use the brokerCertChainAndKey property . Prerequisites A running Cluster Operator In this procedure, the Kafka cluster name is my-cluster . The name of the listener is external1 . Procedure Configure a Kafka resource with an external listener set to the route type. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # ... listeners: - name: external1 port: 9094 type: route tls: true 1 authentication: type: tls # ... # ... zookeeper: # ... 1 For route type listeners, TLS encryption must be enabled ( true ). Create or update the resource. oc apply -f <kafka_configuration_file> A cluster CA certificate to verify the identity of the kafka brokers is created in the secret my-cluster-cluster-ca-cert . ClusterIP type services are created for each Kafka broker, as well as an external bootstrap service. A route is also created for each service, with a DNS address (host/port) to expose them using the default OpenShift HAProxy router. The routes are preconfigured with TLS passthrough. Routes created for the bootstraps and brokers NAME HOST/PORT SERVICES PORT TERMINATION my-cluster-kafka-external1-0 my-cluster-kafka-external1-0-my-project.router.com my-cluster-kafka-external1-0 9094 passthrough my-cluster-kafka-external1-1 my-cluster-kafka-external1-1-my-project.router.com my-cluster-kafka-external1-1 9094 passthrough my-cluster-kafka-external1-2 my-cluster-kafka-external1-2-my-project.router.com my-cluster-kafka-external1-2 9094 passthrough my-cluster-kafka-external1-bootstrap my-cluster-kafka-external1-bootstrap-my-project.router.com my-cluster-kafka-external1-bootstrap 9094 passthrough The DNS addresses used for client connection are propagated to the status of each route. Example status for the bootstrap route status: ingress: - host: >- my-cluster-kafka-external1-bootstrap-my-project.router.com # ... Use a target broker to check the client-server TLS connection on port 443 using the OpenSSL s_client . openssl s_client -connect my-cluster-kafka-external1-0-my-project.router.com:443 -servername my-cluster-kafka-external1-0-my-project.router.com -showcerts The server name is the Server Name Indication (SNI) for passing the connection to the broker. If the connection is successful, the certificates for the broker are returned. Certificates for the broker Certificate chain 0 s:O = io.strimzi, CN = my-cluster-kafka i:O = io.strimzi, CN = cluster-ca v0 Retrieve the address of the bootstrap service from the status of the Kafka resource. oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external1")].bootstrapServers}{"\n"}' my-cluster-kafka-external1-bootstrap-my-project.router.com:443 The address comprises the Kafka cluster name, the listener name, the project name and the domain of the router ( router.com in this example). Extract the cluster CA certificate. oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Configure your client to connect to the brokers. Specify the address for the bootstrap service and port 443 in your Kafka client as the bootstrap address to connect to the Kafka cluster. Add the extracted certificate to the truststore of your Kafka client to configure a TLS connection. 
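For example, one way to import the extracted ca.crt into a PKCS #12 truststore for a Java-based client is with keytool; the truststore file name and password here are placeholders:

# Build a PKCS #12 truststore containing the cluster CA certificate
keytool -importcert -trustcacerts \
  -alias cluster-ca \
  -file ca.crt \
  -keystore client-truststore.p12 \
  -storetype PKCS12 \
  -storepass <truststore_password> \
  -noprompt

The client can then reference this truststore through its ssl.truststore.location and ssl.truststore.password settings, or use the PEM file directly as shown in the earlier truststore configuration examples.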
If you enabled a client authentication mechanism, you will also need to configure it in your client. Note If you are using your own listener certificates, check whether you need to add the CA certificate to the client's truststore configuration. If it is a public (external) CA, you usually won't need to add it. 14.8. Returning connection details for services Service discovery makes it easier for client applications running in the same OpenShift cluster as Streams for Apache Kafka to interact with a Kafka cluster. A service discovery label and annotation are generated for services used to access the Kafka cluster: Internal Kafka bootstrap service Kafka Bridge service The label helps to make the service discoverable, while the annotation provides connection details for client applications to establish connections. The service discovery label, strimzi.io/discovery , is set as true for the Service resources. The service discovery annotation has the same key, providing connection details in JSON format for each service. Example internal Kafka bootstrap service apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { "port" : 9092, "tls" : false, "protocol" : "kafka", "auth" : "scram-sha-512" }, { "port" : 9093, "tls" : true, "protocol" : "kafka", "auth" : "tls" } ] labels: strimzi.io/cluster: my-cluster strimzi.io/discovery: "true" strimzi.io/kind: Kafka strimzi.io/name: my-cluster-kafka-bootstrap name: my-cluster-kafka-bootstrap spec: #... Example Kafka Bridge service apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { "port" : 8080, "tls" : false, "auth" : "none", "protocol" : "http" } ] labels: strimzi.io/cluster: my-bridge strimzi.io/discovery: "true" strimzi.io/kind: KafkaBridge strimzi.io/name: my-bridge-bridge-service Find services by specifying the discovery label when fetching services from the command line or a corresponding API call. Returning services using the discovery label oc get service -l strimzi.io/discovery=true Connection details are returned when retrieving the service discovery label. | [
"run kafka-producer -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server cluster-name -kafka-bootstrap:9092 --topic my-topic",
"run kafka-consumer -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server cluster-name -kafka-bootstrap:9092 --topic my-topic --from-beginning",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-certificate.crt key: my-key.key #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # listeners: 1 - name: external1 2 port: 9094 3 type: <listener_type> 4 tls: true 5 authentication: type: tls 6 configuration: 7 # authorization: 8 type: simple superUsers: - super-user-name 9 #",
"apply -f <kafka_configuration_file>",
"get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==\" <listener_name> \")].bootstrapServers}{\"\\n\"}'",
"get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external\")].bootstrapServers}{\"\\n\"}'",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster 1 spec: authentication: type: tls 2 authorization: type: simple acls: 3 - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read - resource: type: group name: my-group patternType: literal operations: - Read",
"apply -f USER-CONFIG-FILE",
"apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store",
"get secret <cluster_name> -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt",
"get secret <user_name> -o jsonpath='{.data.user\\.crt}' | base64 -d > user.crt",
"get secret <user_name> -o jsonpath='{.data.user\\.key}' | base64 -d > user.key",
"props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, \" <hostname>:<port> \");",
"props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, \"SSL\"); props.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, \"PEM\"); props.put(SslConfigs.SSL_TRUSTSTORE_CERTIFICATES_CONFIG, \" <ca.crt_file_content> \");",
"props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, \"SSL\"); props.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, \"PEM\"); props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, \" <user.crt_file_content> \"); props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, \" <user.key_file_content> \");",
"props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, \"-----BEGIN CERTIFICATE----- \\n <user_certificate_content_line_1> \\n <user_certificate_content_line_n> \\n-----END CERTIFICATE---\"); props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, \"----BEGIN PRIVATE KEY-----\\n <user_key_content_line_1> \\n <user_key_content_line_n> \\n-----END PRIVATE KEY-----\");",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external4 port: 9094 type: nodeport tls: true authentication: type: tls # # zookeeper: #",
"apply -f <kafka_configuration_file>",
"NAME TYPE CLUSTER-IP PORT(S) my-cluster-kafka-external4-0 NodePort 172.30.55.13 9094:31789/TCP my-cluster-kafka-external4-1 NodePort 172.30.250.248 9094:30028/TCP my-cluster-kafka-external4-2 NodePort 172.30.115.81 9094:32650/TCP my-cluster-kafka-external4-bootstrap NodePort 172.30.30.23 9094:32650/TCP",
"status: clusterId: Y_RJQDGKRXmNF7fEcWldJQ conditions: - lastTransitionTime: '2023-01-31T14:59:37.113630Z' status: 'True' type: Ready kafkaVersion: 3.7.0 listeners: # - addresses: - host: ip-10-0-224-199.us-west-2.compute.internal port: 32650 bootstrapServers: 'ip-10-0-224-199.us-west-2.compute.internal:32650' certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external4 observedGeneration: 2 operatorLastSuccessfulVersion: 2.7 #",
"get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external4\")].bootstrapServers}{\"\\n\"}' ip-10-0-224-199.us-west-2.compute.internal:32650",
"get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls # # zookeeper: #",
"apply -f <kafka_configuration_file>",
"NAME TYPE CLUSTER-IP PORT(S) my-cluster-kafka-external3-0 LoadBalancer 172.30.204.234 9094:30011/TCP my-cluster-kafka-external3-1 LoadBalancer 172.30.164.89 9094:32544/TCP my-cluster-kafka-external3-2 LoadBalancer 172.30.73.151 9094:32504/TCP my-cluster-kafka-external3-bootstrap LoadBalancer 172.30.30.228 9094:30371/TCP NAME EXTERNAL-IP (loadbalancer) my-cluster-kafka-external3-0 a8a519e464b924000b6c0f0a05e19f0d-1132975133.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-1 ab6adc22b556343afb0db5ea05d07347-611832211.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-2 a9173e8ccb1914778aeb17eca98713c0-777597560.us-west-2.elb.amazonaws.com my-cluster-kafka-external3-bootstrap a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com",
"status: clusterId: Y_RJQDGKRXmNF7fEcWldJQ conditions: - lastTransitionTime: '2023-01-31T14:59:37.113630Z' status: 'True' type: Ready kafkaVersion: 3.7.0 listeners: # - addresses: - host: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com port: 9094 bootstrapServers: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- name: external3 observedGeneration: 2 operatorLastSuccessfulVersion: 2.7 #",
"status: loadBalancer: ingress: - hostname: >- a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com #",
"get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external3\")].bootstrapServers}{\"\\n\"}' a8d4a6fb363bf447fb6e475fc3040176-36312313.us-west-2.elb.amazonaws.com:9094",
"get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: external1 port: 9094 type: route tls: true 1 authentication: type: tls # # zookeeper: #",
"apply -f <kafka_configuration_file>",
"NAME HOST/PORT SERVICES PORT TERMINATION my-cluster-kafka-external1-0 my-cluster-kafka-external1-0-my-project.router.com my-cluster-kafka-external1-0 9094 passthrough my-cluster-kafka-external1-1 my-cluster-kafka-external1-1-my-project.router.com my-cluster-kafka-external1-1 9094 passthrough my-cluster-kafka-external1-2 my-cluster-kafka-external1-2-my-project.router.com my-cluster-kafka-external1-2 9094 passthrough my-cluster-kafka-external1-bootstrap my-cluster-kafka-external1-bootstrap-my-project.router.com my-cluster-kafka-external1-bootstrap 9094 passthrough",
"status: ingress: - host: >- my-cluster-kafka-external1-bootstrap-my-project.router.com #",
"openssl s_client -connect my-cluster-kafka-external1-0-my-project.router.com:443 -servername my-cluster-kafka-external1-0-my-project.router.com -showcerts",
"Certificate chain 0 s:O = io.strimzi, CN = my-cluster-kafka i:O = io.strimzi, CN = cluster-ca v0",
"get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external1\")].bootstrapServers}{\"\\n\"}' my-cluster-kafka-external1-bootstrap-my-project.router.com:443",
"get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt",
"apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 9092, \"tls\" : false, \"protocol\" : \"kafka\", \"auth\" : \"scram-sha-512\" }, { \"port\" : 9093, \"tls\" : true, \"protocol\" : \"kafka\", \"auth\" : \"tls\" } ] labels: strimzi.io/cluster: my-cluster strimzi.io/discovery: \"true\" strimzi.io/kind: Kafka strimzi.io/name: my-cluster-kafka-bootstrap name: my-cluster-kafka-bootstrap spec: #",
"apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 8080, \"tls\" : false, \"auth\" : \"none\", \"protocol\" : \"http\" } ] labels: strimzi.io/cluster: my-bridge strimzi.io/discovery: \"true\" strimzi.io/kind: KafkaBridge strimzi.io/name: my-bridge-bridge-service",
"get service -l strimzi.io/discovery=true"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/deploy-client-access-str |
Logging | Logging OpenShift Container Platform 4.18 Configuring and using logging in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408",
"__error__ JSONParserErr __error_details__ Value looks like object, but can't find closing '}' symbol",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: authentication: token: from: serviceAccount target: name: logging-loki namespace: openshift-logging tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 spec: serviceAccount: name: collector outputs: - name: loki-otlp type: lokiStack 2 lokiStack: target: name: logging-loki namespace: openshift-logging dataModel: Otel 3 authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: my-pipeline inputRefs: - application - infrastructure outputRefs: - loki-otlp",
"oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 name: clf-otlp spec: serviceAccount: name: <service_account_name> outputs: - name: otlp type: otlp otlp: tuning: compression: gzip deliveryMode: AtLeastOnce maxRetryDuration: 20 maxWrite: 10M minRetryDuration: 5 url: <otlp_url> 2 pipelines: - inputRefs: - application - infrastructure - audit name: otlp-logs outputRefs: - otlp",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: managementState: Managed outputs: - name: <output_name> type: http http: headers: 1 h1: v1 h2: v2 authentication: username: key: username secretName: <http_auth_secret> password: key: password secretName: <http_auth_secret> timeout: 300 proxyURL: <proxy_url> 2 url: <url> 3 tls: insecureSkipVerify: 4 ca: key: <ca_certificate> secretName: <secret_name> 5 pipelines: - inputRefs: - application name: pipe1 outputRefs: - <output_name> 6 serviceAccount: name: <service_account_name> 7",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector spec: managementState: Managed outputs: - name: rsyslog-east 1 syslog: appName: <app_name> 2 enrichment: KubernetesMinimal facility: <facility_value> 3 msgId: <message_ID> 4 payloadKey: <record_field> 5 procId: <process_ID> 6 rfc: <RFC3164_or_RFC5424> 7 severity: informational 8 tuning: deliveryMode: <AtLeastOnce_or_AtMostOnce> 9 url: <url> 10 tls: 11 ca: key: ca-bundle.crt secretName: syslog-secret type: syslog pipelines: - inputRefs: 12 - application name: syslog-east 13 outputRefs: - rsyslog-east serviceAccount: 14 name: logcollector",
"oc create -f <filename>.yaml",
"spec: outputs: - name: syslogout syslog: enrichment: KubernetesMinimal: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.example.com:6514 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout",
"2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: {...}",
"2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: namespace_name=cakephp-project container_name=mysql pod_name=mysql-1-wr96h,message: {...}",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: \"^open\" - test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1 type: application",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4 type: application",
"oc apply -f <filename>.yaml",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\"},\"type\":\"memberlist\"}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"oc get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"oc delete pvc <pvc_name> -n openshift-logging",
"oc delete pod <pod_name> -n openshift-logging",
"oc patch pvc <pvc_name> -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"spec: storage: schemas: - version: v13 effectiveDate: 2024-10-25",
"spec: limits: global: otlp: {} 1 tenants: application: 2 otlp: {}",
"spec: limits: global: otlp: streamLabels: resourceAttributes: - name: \"k8s.namespace.name\" - name: \"k8s.pod.name\" - name: \"k8s.container.name\"",
"spec: limits: global: otlp: streamLabels: drop: resourceAttributes: - name: \"process.command_line\" - name: \"k8s\\\\.pod\\\\.labels\\\\..+\" regex: true scopeAttributes: - name: \"service.name\" logAttributes: - name: \"http.route\"",
"spec: tenants: mode: openshift-logging openshift: otlp: disableRecommendedAttributes: true 1",
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: authentication: token: from: serviceAccount target: name: logging-loki namespace: openshift-logging tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 spec: serviceAccount: name: collector outputs: - name: loki-otlp type: lokiStack 2 lokiStack: target: name: logging-loki namespace: openshift-logging dataModel: Otel 3 authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: my-pipeline inputRefs: - application - infrastructure outputRefs: - loki-otlp",
"oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 name: clf-otlp spec: serviceAccount: name: <service_account_name> outputs: - name: otlp type: otlp otlp: tuning: compression: gzip deliveryMode: AtLeastOnce maxRetryDuration: 20 maxWrite: 10M minRetryDuration: 5 url: <otlp_url> 2 pipelines: - inputRefs: - application - infrastructure - audit name: otlp-logs outputRefs: - otlp",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: \"^open\" - test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1 type: application",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4 type: application",
"oc apply -f <filename>.yaml",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\"},\"type\":\"memberlist\"}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"oc get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"oc delete pvc <pvc_name> -n openshift-logging",
"oc delete pod <pod_name> -n openshift-logging",
"oc patch pvc <pvc_name> -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"spec: storage: schemas: - version: v13 effectiveDate: 2024-10-25",
"spec: limits: global: otlp: {} 1 tenants: application: otlp: {} 2",
"spec: limits: global: otlp: streamLabels: resourceAttributes: - name: \"k8s.namespace.name\" - name: \"k8s.pod.name\" - name: \"k8s.container.name\"",
"spec: limits: global: otlp: streamLabels: structuredMetadata: resourceAttributes: - name: \"process.command_line\" - name: \"k8s\\\\.pod\\\\.labels\\\\..+\" regex: true scopeAttributes: - name: \"service.name\" logAttributes: - name: \"http.route\"",
"spec: tenants: mode: openshift-logging openshift: otlp: disableRecommendedAttributes: true 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/logging/cluster-logging-about_cluster-logging |
Performing a minor update of Red Hat OpenStack Platform | Performing a minor update of Red Hat OpenStack Platform Red Hat OpenStack Platform 17.1 Apply the latest bug fixes and security improvements to Red Hat OpenStack Platform OpenStack Documentation Team [email protected] Abstract You can perform a minor update of your Red Hat OpenStack Platform (RHOSP) environment to keep it updated with the latest packages and containers. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/performing_a_minor_update_of_red_hat_openstack_platform/index |
Deploying RHEL 9 on Google Cloud Platform | Deploying RHEL 9 on Google Cloud Platform Red Hat Enterprise Linux 9 Obtaining RHEL system images and creating RHEL instances on GCP Red Hat Customer Content Services | [
"provider = \"gcp\" [settings] bucket = \"GCP_BUCKET\" region = \"GCP_STORAGE_REGION\" object = \"OBJECT_KEY\" credentials = \"GCP_CREDENTIALS\"",
"sudo base64 -w 0 cee-gcp-nasa-476a1fa485b7.json",
"sudo composer-cli compose start BLUEPRINT-NAME gce IMAGE_KEY gcp-config.toml",
"sudo composer-cli compose status",
"base64 -w 0 \"USD{GOOGLE_APPLICATION_CREDENTIALS}\"",
"provider = \"gcp\" [settings] provider = \"gcp\" [settings] credentials = \"GCP_CREDENTIALS\"",
"[gcp] credentials = \" PATH_TO_GCP_ACCOUNT_CREDENTIALS \"",
"virt-install --name kvmtest --memory 2048 --vcpus 2 --cdrom /home/username/Downloads/rhel9.iso,bus=virtio --os-variant=rhel9.0",
"subscription-manager register --auto-attach",
"dnf install cloud-init systemctl enable --now cloud-init.service",
"gcloud projects create my-gcp-project3 --name project3",
"ssh-keygen -t rsa -f ~/.ssh/google_compute_engine",
"ssh -i ~/.ssh/google_compute_engine <username> @ <instance_external_ip>",
"gcloud auth login",
"gsutil mb gs://bucket_name",
"qemu-img convert -f qcow2 -O raw rhel-9.0-sample.qcow2 disk.raw",
"tar --format=oldgnu -Sczf disk.raw.tar.gz disk.raw",
"gsutil cp disk.raw.tar.gz gs://bucket_name",
"gcloud compute images create my-image-name --source-uri gs://my-bucket-name/disk.raw.tar.gz",
"gcloud compute instances create myinstance3 --zone=us-central1-a --image test-iso2-image",
"gcloud compute instances list",
"ssh -i ~/.ssh/google_compute_engine <user_name>@<instance_external_ip>",
"subscription-manager register --auto-attach",
"insights-client register --display-name <display-name-value>",
"gcloud auth login",
"gsutil mb gs:// BucketName",
"gsutil mb gs://rhel-ha-bucket",
"qemu-img convert -f qcow2 ImageName .qcow2 -O raw disk.raw",
"tar -Sczf ImageName .tar.gz disk.raw",
"gsutil cp ImageName .tar.gz gs:// BucketName",
"gcloud compute images create BaseImageName --source-uri gs:// BucketName / BaseImageName .tar.gz",
"[admin@localhost ~] USD gcloud compute images create rhel-76-server --source-uri gs://user-rhelha/rhel-server-76.tar.gz Created [https://www.googleapis.com/compute/v1/projects/MyProject/global/images/rhel-server-76]. NAME PROJECT FAMILY DEPRECATED STATUS rhel-76-server rhel-ha-testing-on-gcp READY",
"gcloud compute instances create BaseInstanceName --can-ip-forward --machine-type n1-standard-2 --image BaseImageName --service-account ServiceAccountEmail",
"[admin@localhost ~] USD gcloud compute instances create rhel-76-server-base-instance --can-ip-forward --machine-type n1-standard-2 --image rhel-76-server --service-account [email protected] Created [https://www.googleapis.com/compute/v1/projects/rhel-ha-testing-on-gcp/zones/us-east1-b/instances/rhel-76-server-base-instance]. NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS rhel-76-server-base-instance us-east1-bn1-standard-2 10.10.10.3 192.227.54.211 RUNNING",
"ssh root@PublicIPaddress",
"subscription-manager repos --disable= *",
"subscription-manager repos --enable=rhel-9-server-rpms",
"dnf update -y",
"metadata.google.internal iburst Google NTP server",
"rm -f /etc/udev/rules.d/70-persistent-net.rules rm -f /etc/udev/rules.d/75-persistent-net-generator.rules",
"chkconfig network on",
"systemctl enable sshd systemctl is-enabled sshd",
"ln -sf /usr/share/zoneinfo/UTC /etc/localtime",
"Server times out connections after several minutes of inactivity. Keep alive ssh connections by sending a packet every 7 minutes. ServerAliveInterval 420",
"PermitRootLogin no PasswordAuthentication no AllowTcpForwarding yes X11Forwarding no PermitTunnel no Compute times out connections after 10 minutes of inactivity. Keep ssh connections alive by sending a packet every 7 minutes. ClientAliveInterval 420",
"ssh_pwauth from 1 to 0. ssh_pwauth: 0",
"subscription-manager unregister",
"export HISTSIZE=0",
"sync",
"gcloud compute disks snapshot InstanceName --snapshot-names SnapshotName",
"gcloud compute images create ConfiguredImageFromSnapshot --source-snapshot SnapshotName",
"gcloud compute instance-templates create InstanceTemplateName --can-ip-forward --machine-type n1-standard-2 --image ConfiguredImageFromSnapshot --service-account ServiceAccountEmailAddress",
"[admin@localhost ~] USD gcloud compute instance-templates create rhel-91-instance-template --can-ip-forward --machine-type n1-standard-2 --image rhel-91-gcp-image --service-account [email protected] Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/global/instanceTemplates/rhel-91-instance-template]. NAME MACHINE_TYPE PREEMPTIBLE CREATION_TIMESTAMP rhel-91-instance-template n1-standard-2 2018-07-25T11:09:30.506-07:00",
"gcloud compute instances create NodeName01 NodeName02 --source-instance-template InstanceTemplateName --zone RegionZone --network= NetworkName --subnet= SubnetName",
"[admin@localhost ~] USD gcloud compute instances create rhel81-node-01 rhel81-node-02 rhel81-node-03 --source-instance-template rhel-91-instance-template --zone us-west1-b --network=projectVPC --subnet=range0 Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/zones/us-west1-b/instances/rhel81-node-01]. Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/zones/us-west1-b/instances/rhel81-node-02]. Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/zones/us-west1-b/instances/rhel81-node-03]. NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS rhel81-node-01 us-west1-b n1-standard-2 10.10.10.4 192.230.25.81 RUNNING rhel81-node-02 us-west1-b n1-standard-2 10.10.10.5 192.230.81.253 RUNNING rhel81-node-03 us-east1-b n1-standard-2 10.10.10.6 192.230.102.15 RUNNING",
"subscription-manager repos --disable= *",
"subscription-manager repos --enable=rhel-9-server-rpms subscription-manager repos --enable=rhel-9-for-x86_64-highavailability-rpms",
"dnf install -y pcs pacemaker fence-agents-gce resource-agents-gcp",
"dnf update -y",
"passwd hacluster",
"firewall-cmd --permanent --add-service=high-availability firewall-cmd --reload",
"systemctl start pcsd.service systemctl enable pcsd.service Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.",
"systemctl status pcsd.service pcsd.service - PCS GUI and remote configuration interface Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled) Active: active (running) since Mon 2018-06-25 19:21:42 UTC; 15s ago Docs: man:pcsd(8) man:pcs(8) Main PID: 5901 (pcsd) CGroup: /system.slice/pcsd.service └─5901 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &",
"pcs host auth hostname1 hostname2 hostname3 Username: hacluster Password: hostname1 : Authorized hostname2 : Authorized hostname3 : Authorized",
"pcs cluster setup cluster-name hostname1 hostname2 hostname3",
"pcs cluster enable --all",
"pcs cluster start --all",
"fence_gce --zone us-west1-b --project=rhel-ha-on-gcp -o list",
"fence_gce --zone us-west1-b --project=rhel-ha-testing-on-gcp -o list 4435801234567893181,InstanceName-3 4081901234567896811,InstanceName-1 7173601234567893341,InstanceName-2",
"pcs stonith create FenceDeviceName fence_gce zone= Region-Zone project= MyProject",
"pcs status",
"pcs status Cluster name: gcp-cluster Stack: corosync Current DC: rhel81-node-02 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum Last updated: Fri Jul 27 12:53:25 2018 Last change: Fri Jul 27 12:51:43 2018 by root via cibadmin on rhel81-node-01 3 nodes configured 3 resources configured Online: [ rhel81-node-01 rhel81-node-02 rhel81-node-03 ] Full list of resources: us-west1-b-fence (stonith:fence_gce): Started rhel81-node-01 Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled",
"pcs resource describe gcp-vpc-move-vip",
"pcs resource create aliasip gcp-vpc-move-vip alias_ip= UnusedIPaddress/CIDRblock",
"pcs resource create aliasip gcp-vpc-move-vip alias_ip=10.10.10.200/32",
"pcs resource create vip IPaddr2 nic= interface ip= AliasIPaddress cidr_netmask=32",
"pcs resource create vip IPaddr2 nic=eth0 ip=10.10.10.200 cidr_netmask=32",
"pcs resource group add vipgrp aliasip vip",
"pcs status",
"pcs resource move vip Node",
"pcs resource move vip rhel81-node-03",
"pcs status",
"gcloud compute networks subnets update SubnetName --region RegionName --add-secondary-ranges SecondarySubnetName = SecondarySubnetRange",
"gcloud compute networks subnets update range0 --region us-west1 --add-secondary-ranges range1=10.10.20.0/24",
"pcs resource create aliasip gcp-vpc-move-vip alias_ip= UnusedIPaddress/CIDRblock",
"pcs resource create aliasip gcp-vpc-move-vip alias_ip=10.10.20.200/32",
"pcs resource create vip IPaddr2 nic= interface ip= AliasIPaddress cidr_netmask=32",
"pcs resource create vip IPaddr2 nic=eth0 ip=10.10.20.200 cidr_netmask=32",
"pcs resource group add vipgrp aliasip vip",
"pcs status",
"pcs resource move vip Node",
"pcs resource move vip rhel81-node-03",
"pcs status",
"mokutil --sb-state SecureBoot enabled",
"sudo keyctl list %:.platform 4 keys in keyring: 12702216: ---lswrv 0 0 asymmetric: Microsoft Corporation UEFI CA 2011: 13adbf4309bd82709c8cd54f316ed522988a1bd4 50338534: ---lswrv 0 0 asymmetric: Red Hat Secure Boot CA 5: cc6fa5e72868ba494e939bbd680b9144769a9f8f 681047026: ---lswrv 0 0 asymmetric: Microsoft Windows Production PCA 2011: a92902398e16c49778cd90f99e4f9ae17c55af53",
"uuidgen --random > GUID.txt",
"openssl req -quiet -newkey rsa:4096 -nodes -keyout PK.key -new -x509 -sha256 -days 3650 -subj \"/CN=Platform key/\" -outform DER -out PK.cer",
"openssl req -quiet -newkey rsa:4096 -nodes -keyout KEK.key -new -x509 -sha256 -days 3650 -subj \"/CN=Key Exchange Key/\" -outform DER -out KEK.cer",
"openssl req -quiet -newkey rsa:4096 -nodes -keyout custom_db.key -new -x509 -sha256 -days 3650 -subj \"/CN=Signature Database key/\" --outform DER -out custom_db.cer",
"wget https://go.microsoft.com/fwlink/p/?linkid=321194 --user-agent=\"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36\" -O MicCorUEFCA2011_2011-06-27.crt",
"wget https://uefi.org/sites/default/files/resources/x64_DBXUpdate.bin",
"gcloud compute images create <example-rhel-9-efi-image> --source-image projects/ <example_project_id> /global/images/ <example_image_name> --platform-key-file=PK.cer --key-exchange-key-file=KEK.cer --signature-database-file=custom_db.cer,MicCorUEFCA2011_2011-06-27.crt --forbidden-database-file x64_DBXUpdate.bin --guest-os-features=\"UEFI_COMPATIBLE\"",
"mokutil --sb-state SecureBoot enabled",
"sudo keyctl list %:.platform 757453569: ---lswrv 0 0 asymmetric: Signature Database key: f064979641c24e1b935e402bdbc3d5c4672a1acc"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/deploying_rhel_9_on_google_cloud_platform/index |
Chapter 20. Internationalization | 20.1. Red Hat Enterprise Linux 7 International Languages
Red Hat Enterprise Linux 7 supports the installation of multiple languages and the changing of languages based on your requirements. The following languages are supported in Red Hat Enterprise Linux 7: East Asian Languages - Japanese, Korean, Simplified Chinese, and Traditional Chinese; European Languages - English, German, Spanish, French, Italian, Portuguese Brazilian, and Russian; Indic Languages - Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Odia, Punjabi, Tamil, and Telugu.
The table below summarizes the currently supported languages, their locales, the default fonts installed, and the packages required for some of the supported languages. For more information on font configuration, see the Desktop Migration and Administration Guide.
Table 20.1. Language Support Matrix
Territory | Language | Locale | Default Font (Font Package) | Input Methods
Brazil | Portuguese | pt_BR.UTF-8 | DejaVu Sans (dejavu-sans-fonts) | -
France | French | fr_FR.UTF-8 | DejaVu Sans (dejavu-sans-fonts) | -
Germany | German | de_DE.UTF-8 | DejaVu Sans (dejavu-sans-fonts) | -
Italy | Italian | it_IT.UTF-8 | DejaVu Sans (dejavu-sans-fonts) | -
Russia | Russian | ru_RU.UTF-8 | DejaVu Sans (dejavu-sans-fonts) | -
Spain | Spanish | es_ES.UTF-8 | DejaVu Sans (dejavu-sans-fonts) | -
USA | English | en_US.UTF-8 | DejaVu Sans (dejavu-sans-fonts) | -
China | Simplified Chinese | zh_CN.UTF-8 | WenQuanYi Zen Hei Sharp (wqy-zenhei-fonts) | ibus-libpinyin, ibus-table-chinese
Japan | Japanese | ja_JP.UTF-8 | VL PGothic (vlgothic-p-fonts) | ibus-kkc
Korea | Korean | ko_KR.UTF-8 | NanumGothic (nhn-nanum-gothic-fonts) | ibus-hangul
Taiwan | Traditional Chinese | zh_TW.UTF-8 | AR PL UMing TW (cjkuni-uming-fonts) | ibus-chewing, ibus-table-chinese
India | Assamese | as_IN.UTF-8 | Lohit Assamese (lohit-assamese-fonts) | ibus-m17n, m17n-db, m17n-contrib
India | Bengali | bn_IN.UTF-8 | Lohit Bengali (lohit-bengali-fonts) | ibus-m17n, m17n-db, m17n-contrib
India | Gujarati | gu_IN.UTF-8 | Lohit Gujarati (lohit-gujarati-fonts) | ibus-m17n, m17n-db, m17n-contrib
India | Hindi | hi_IN.UTF-8 | Lohit Hindi (lohit-devanagari-fonts) | ibus-m17n, m17n-db, m17n-contrib
India | Kannada | kn_IN.UTF-8 | Lohit Kannada (lohit-kannada-fonts) | ibus-m17n, m17n-db, m17n-contrib
India | Malayalam | ml_IN.UTF-8 | Meera (smc-meera-fonts) | ibus-m17n, m17n-db, m17n-contrib
India | Marathi | mr_IN.UTF-8 | Lohit Marathi (lohit-marathi-fonts) | ibus-m17n, m17n-db, m17n-contrib
India | Odia | or_IN.UTF-8 | Lohit Oriya (lohit-oriya-fonts) | ibus-m17n, m17n-db, m17n-contrib
India | Punjabi | pa_IN.UTF-8 | Lohit Punjabi (lohit-punjabi-fonts) | ibus-m17n, m17n-db, m17n-contrib
India | Tamil | ta_IN.UTF-8 | Lohit Tamil (lohit-tamil-fonts) | ibus-m17n, m17n-db, m17n-contrib
India | Telugu | te_IN.UTF-8 | Lohit Telugu (lohit-telugu-fonts) | ibus-m17n, m17n-db, m17n-contrib
 | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-internationalization
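The chapter above lists which locale, default font, and input-method packages each language uses, but not the commands for switching to one of them. The following is a minimal sketch, not part of the original chapter, assuming a RHEL 7 host with yum and systemd's localectl available; the Japanese locale and package names are taken from Table 20.1 purely as an illustration.

# Install the default font and input method packages listed for Japanese (ja_JP.UTF-8)
$ sudo yum install vlgothic-p-fonts ibus-kkc
# Switch the system locale and confirm the new setting
$ sudo localectl set-locale LANG=ja_JP.UTF-8
$ localectl status

The changed locale typically takes effect for new login sessions rather than for the current one.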
Chapter 3. CustomResourceDefinition [apiextensions.k8s.io/v1] | Chapter 3. CustomResourceDefinition [apiextensions.k8s.io/v1] Description CustomResourceDefinition represents a resource that should be exposed on the API server. Its name MUST be in the format <.spec.name>.<.spec.group>. Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object CustomResourceDefinitionSpec describes how a user wants their resource to appear status object CustomResourceDefinitionStatus indicates the state of the CustomResourceDefinition 3.1.1. .spec Description CustomResourceDefinitionSpec describes how a user wants their resource to appear Type object Required group names scope versions Property Type Description conversion object CustomResourceConversion describes how to convert different versions of a CR. group string group is the API group of the defined custom resource. The custom resources are served under /apis/<group>/... . Must match the name of the CustomResourceDefinition (in the form <names.plural>.<group> ). names object CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition preserveUnknownFields boolean preserveUnknownFields indicates that object fields which are not specified in the OpenAPI schema should be preserved when persisting to storage. apiVersion, kind, metadata and known fields inside metadata are always preserved. This field is deprecated in favor of setting x-preserve-unknown-fields to true in spec.versions[*].schema.openAPIV3Schema . See https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#pruning-versus-preserving-unknown-fields for details. scope string scope indicates whether the defined custom resource is cluster- or namespace-scoped. Allowed values are Cluster and Namespaced . versions array versions is the list of all API versions of the defined custom resource. Version names are used to compute the order in which served versions are listed in API discovery. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10. versions[] object CustomResourceDefinitionVersion describes a version for CRD. 3.1.2. .spec.conversion Description CustomResourceConversion describes how to convert different versions of a CR. 
Type object Required strategy Property Type Description strategy string strategy specifies how custom resources are converted between versions. Allowed values are: - None : The converter only change the apiVersion and would not touch any other field in the custom resource. - Webhook : API Server will call to an external webhook to do the conversion. Additional information is needed for this option. This requires spec.preserveUnknownFields to be false, and spec.conversion.webhook to be set. webhook object WebhookConversion describes how to call a conversion webhook 3.1.3. .spec.conversion.webhook Description WebhookConversion describes how to call a conversion webhook Type object Required conversionReviewVersions Property Type Description clientConfig object WebhookClientConfig contains the information to make a TLS connection with the webhook. conversionReviewVersions array (string) conversionReviewVersions is an ordered list of preferred ConversionReview versions the Webhook expects. The API server will use the first version in the list which it supports. If none of the versions specified in this list are supported by API server, conversion will fail for the custom resource. If a persisted Webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail. 3.1.4. .spec.conversion.webhook.clientConfig Description WebhookClientConfig contains the information to make a TLS connection with the webhook. Type object Property Type Description caBundle string caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used. service object ServiceReference holds a reference to Service.legacy.k8s.io url string url gives the location of the webhook, in standard URL form ( scheme://host:port/path ). Exactly one of url or service must be specified. The host should not refer to a service running in the cluster; use the service field instead. The host might be resolved via external DNS in some apiservers (e.g., kube-apiserver cannot resolve in-cluster DNS as that would be a layering violation). host may also be an IP address. Please note that using localhost or 127.0.0.1 as a host is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster. The scheme must be "https"; the URL must begin with "https://". A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier. Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#... ") and query parameters ("?... ") are not allowed, either. 3.1.5. .spec.conversion.webhook.clientConfig.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Required namespace name Property Type Description name string name is the name of the service. Required namespace string namespace is the namespace of the service. Required path string path is an optional URL path at which the webhook will be contacted. port integer port is an optional service port at which the webhook will be contacted. port should be a valid port number (1-65535, inclusive). Defaults to 443 for backward compatibility. 3.1.6. 
.spec.names Description CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition Type object Required plural kind Property Type Description categories array (string) categories is a list of grouped resources this custom resource belongs to (e.g. 'all'). This is published in API discovery documents, and used by clients to support invocations like kubectl get all . kind string kind is the serialized kind of the resource. It is normally CamelCase and singular. Custom resource instances will use this value as the kind attribute in API calls. listKind string listKind is the serialized kind of the list for this resource. Defaults to "`kind`List". plural string plural is the plural name of the resource to serve. The custom resources are served under /apis/<group>/<version>/... /<plural> . Must match the name of the CustomResourceDefinition (in the form <names.plural>.<group> ). Must be all lowercase. shortNames array (string) shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like kubectl get <shortname> . It must be all lowercase. singular string singular is the singular name of the resource. It must be all lowercase. Defaults to lowercased kind . 3.1.7. .spec.versions Description versions is the list of all API versions of the defined custom resource. Version names are used to compute the order in which served versions are listed in API discovery. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10. Type array 3.1.8. .spec.versions[] Description CustomResourceDefinitionVersion describes a version for CRD. Type object Required name served storage Property Type Description additionalPrinterColumns array additionalPrinterColumns specifies additional columns returned in Table output. See https://kubernetes.io/docs/reference/using-api/api-concepts/#receiving-resources-as-tables for details. If no columns are specified, a single column displaying the age of the custom resource is used. additionalPrinterColumns[] object CustomResourceColumnDefinition specifies a column for server side printing. deprecated boolean deprecated indicates this version of the custom resource API is deprecated. When set to true, API requests to this version receive a warning header in the server response. Defaults to false. deprecationWarning string deprecationWarning overrides the default warning returned to API clients. May only be set when deprecated is true. The default warning indicates this version is deprecated and recommends use of the newest served version of equal or greater stability, if one exists. name string name is the version name, e.g. "v1", "v2beta1", etc. The custom resources are served under this version at /apis/<group>/<version>/... if served is true. schema object CustomResourceValidation is a list of validation methods for CustomResources. 
served boolean served is a flag enabling/disabling this version from being served via REST APIs storage boolean storage indicates this version should be used when persisting custom resources to storage. There must be exactly one version with storage=true. subresources object CustomResourceSubresources defines the status and scale subresources for CustomResources. 3.1.9. .spec.versions[].additionalPrinterColumns Description additionalPrinterColumns specifies additional columns returned in Table output. See https://kubernetes.io/docs/reference/using-api/api-concepts/#receiving-resources-as-tables for details. If no columns are specified, a single column displaying the age of the custom resource is used. Type array 3.1.10. .spec.versions[].additionalPrinterColumns[] Description CustomResourceColumnDefinition specifies a column for server side printing. Type object Required name type jsonPath Property Type Description description string description is a human readable description of this column. format string format is an optional OpenAPI type definition for this column. The 'name' format is applied to the primary identifier column to assist in clients identifying column is the resource name. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details. jsonPath string jsonPath is a simple JSON path (i.e. with array notation) which is evaluated against each custom resource to produce the value for this column. name string name is a human readable name for the column. priority integer priority is an integer defining the relative importance of this column compared to others. Lower numbers are considered higher priority. Columns that may be omitted in limited space scenarios should be given a priority greater than 0. type string type is an OpenAPI type definition for this column. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details. 3.1.11. .spec.versions[].schema Description CustomResourceValidation is a list of validation methods for CustomResources. Type object Property Type Description openAPIV3Schema `` openAPIV3Schema is the OpenAPI v3 schema to use for validation and pruning. 3.1.12. .spec.versions[].subresources Description CustomResourceSubresources defines the status and scale subresources for CustomResources. Type object Property Type Description scale object CustomResourceSubresourceScale defines how to serve the scale subresource for CustomResources. status object CustomResourceSubresourceStatus defines how to serve the status subresource for CustomResources. Status is represented by the .status JSON path inside of a CustomResource. When set, * exposes a /status subresource for the custom resource * PUT requests to the /status subresource take a custom resource object, and ignore changes to anything except the status stanza * PUT/POST/PATCH requests to the custom resource ignore changes to the status stanza 3.1.13. .spec.versions[].subresources.scale Description CustomResourceSubresourceScale defines how to serve the scale subresource for CustomResources. Type object Required specReplicasPath statusReplicasPath Property Type Description labelSelectorPath string labelSelectorPath defines the JSON path inside of a custom resource that corresponds to Scale status.selector . Only JSON paths without the array notation are allowed. Must be a JSON Path under .status or .spec . Must be set to work with HorizontalPodAutoscaler. 
The field pointed by this JSON path must be a string field (not a complex selector struct) which contains a serialized label selector in string form. More info: https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions#scale-subresource If there is no value under the given path in the custom resource, the status.selector value in the /scale subresource will default to the empty string. specReplicasPath string specReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale spec.replicas . Only JSON paths without the array notation are allowed. Must be a JSON Path under .spec . If there is no value under the given path in the custom resource, the /scale subresource will return an error on GET. statusReplicasPath string statusReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale status.replicas . Only JSON paths without the array notation are allowed. Must be a JSON Path under .status . If there is no value under the given path in the custom resource, the status.replicas value in the /scale subresource will default to 0. 3.1.14. .spec.versions[].subresources.status Description CustomResourceSubresourceStatus defines how to serve the status subresource for CustomResources. Status is represented by the .status JSON path inside of a CustomResource. When set, * exposes a /status subresource for the custom resource * PUT requests to the /status subresource take a custom resource object, and ignore changes to anything except the status stanza * PUT/POST/PATCH requests to the custom resource ignore changes to the status stanza Type object 3.1.15. .status Description CustomResourceDefinitionStatus indicates the state of the CustomResourceDefinition Type object Property Type Description acceptedNames object CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition conditions array conditions indicate state for particular aspects of a CustomResourceDefinition conditions[] object CustomResourceDefinitionCondition contains details for the current condition of this pod. storedVersions array (string) storedVersions lists all versions of CustomResources that were ever persisted. Tracking these versions allows a migration path for stored versions in etcd. The field is mutable so a migration controller can finish a migration to another version (ensuring no old objects are left in storage), and then remove the rest of the versions from this list. Versions may not be removed from spec.versions while they exist in this list. 3.1.16. .status.acceptedNames Description CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition Type object Required plural kind Property Type Description categories array (string) categories is a list of grouped resources this custom resource belongs to (e.g. 'all'). This is published in API discovery documents, and used by clients to support invocations like kubectl get all . kind string kind is the serialized kind of the resource. It is normally CamelCase and singular. Custom resource instances will use this value as the kind attribute in API calls. listKind string listKind is the serialized kind of the list for this resource. Defaults to "`kind`List". plural string plural is the plural name of the resource to serve. The custom resources are served under /apis/<group>/<version>/... /<plural> . Must match the name of the CustomResourceDefinition (in the form <names.plural>.<group> ). Must be all lowercase. 
shortNames array (string) shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like kubectl get <shortname> . It must be all lowercase. singular string singular is the singular name of the resource. It must be all lowercase. Defaults to lowercased kind . 3.1.17. .status.conditions Description conditions indicate state for particular aspects of a CustomResourceDefinition Type array 3.1.18. .status.conditions[] Description CustomResourceDefinitionCondition contains details for the current condition of this pod. Type object Required type status Property Type Description lastTransitionTime Time lastTransitionTime last time the condition transitioned from one status to another. message string message is a human-readable message indicating details about last transition. reason string reason is a unique, one-word, CamelCase reason for the condition's last transition. status string status is the status of the condition. Can be True, False, Unknown. type string type is the type of the condition. Types include Established, NamesAccepted and Terminating. 3.2. API endpoints The following API endpoints are available: /apis/apiextensions.k8s.io/v1/customresourcedefinitions DELETE : delete collection of CustomResourceDefinition GET : list or watch objects of kind CustomResourceDefinition POST : create a CustomResourceDefinition /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions GET : watch individual changes to a list of CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name} DELETE : delete a CustomResourceDefinition GET : read the specified CustomResourceDefinition PATCH : partially update the specified CustomResourceDefinition PUT : replace the specified CustomResourceDefinition /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions/{name} GET : watch changes to an object of kind CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}/status GET : read status of the specified CustomResourceDefinition PATCH : partially update status of the specified CustomResourceDefinition PUT : replace status of the specified CustomResourceDefinition 3.2.1. /apis/apiextensions.k8s.io/v1/customresourcedefinitions Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CustomResourceDefinition Table 3.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 3.3. Body parameters Parameter Type Description body DeleteOptions schema Table 3.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CustomResourceDefinition Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinitionList schema 401 - Unauthorized Empty HTTP method POST Description create a CustomResourceDefinition Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.8. Body parameters Parameter Type Description body CustomResourceDefinition schema Table 3.9. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 202 - Accepted CustomResourceDefinition schema 401 - Unauthorized Empty 3.2.2. /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions Table 3.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the CustomResourceDefinition Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CustomResourceDefinition Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CustomResourceDefinition Table 3.17. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CustomResourceDefinition Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.19. Body parameters Parameter Type Description body Patch schema Table 3.20. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CustomResourceDefinition Table 3.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.22. Body parameters Parameter Type Description body CustomResourceDefinition schema Table 3.23. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty 3.2.4. /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions/{name} Table 3.24. Global path parameters Parameter Type Description name string name of the CustomResourceDefinition Table 3.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. 
Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}/status Table 3.27. Global path parameters Parameter Type Description name string name of the CustomResourceDefinition Table 3.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified CustomResourceDefinition Table 3.29. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CustomResourceDefinition Table 3.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.31. Body parameters Parameter Type Description body Patch schema Table 3.32. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CustomResourceDefinition Table 3.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.34. Body parameters Parameter Type Description body CustomResourceDefinition schema Table 3.35. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/extension_apis/customresourcedefinition-apiextensions-k8s-io-v1
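The field reference above is easier to follow against a concrete manifest. The following is a minimal sketch of a CustomResourceDefinition that exercises the documented fields - names, a served and stored version with an OpenAPI v3 schema, additional printer columns, the status and scale subresources, and the None conversion strategy. The group stable.example.com, the kind Widget, and the field names under spec are hypothetical placeholders, not part of any shipped product.

# Hypothetical CRD illustrating the spec fields documented in section 3.1
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Must match <names.plural>.<group>
  name: widgets.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
    listKind: WidgetList
    shortNames:
      - wd
    categories:
      - all
  conversion:
    # No webhook: the API server only rewrites apiVersion between versions
    strategy: None
  versions:
    - name: v1
      served: true
      storage: true          # exactly one version may set storage: true
      additionalPrinterColumns:
        - name: Replicas
          type: integer
          jsonPath: .spec.replicas
        - name: Age
          type: date
          jsonPath: .metadata.creationTimestamp
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
            status:
              type: object
              properties:
                replicas:
                  type: integer
                selector:
                  type: string
      subresources:
        status: {}
        scale:
          specReplicasPath: .spec.replicas
          statusReplicasPath: .status.replicas
          labelSelectorPath: .status.selector

Creating this object through the POST endpoint listed in section 3.2.1 (or with oc apply -f) serves the hypothetical resources at /apis/stable.example.com/v1/namespaces/<namespace>/widgets, lists them under kubectl get all because of the categories entry, and lets kubectl scale act on the scale subresource.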
Part VI. Manage | Part VI. Manage | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_amq_interconnect/management |
Chapter 25. Additional resources | Chapter 25. Additional resources Installing and configuring Red Hat Process Automation Manager in a Red Hat JBoss EAP clustered environment | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/additional_resources_2 |
Chapter 2. Persistent naming attributes | Chapter 2. Persistent naming attributes The way you identify and manage storage devices ensures the stability and predictability of the system. RHEL 9 uses two primary naming schemes for this purpose: traditional device names and persistent naming attributes. Traditional device names Traditional device names are determined by the Linux kernel based on the physical location of the device in the system. For example, the first SATA drive is usually labeled as /dev/sda , the second as /dev/sdb , and so on. While these names are straightforward, they are subject to change when devices are added or removed or when the hardware configuration is modified. This can pose challenges for scripting and configuration files. Furthermore, traditional names lack descriptive information about the purpose or characteristics of the device. Persistent naming attributes Persistent naming attributes (PNAs) are based on unique characteristics of the storage devices, making them more stable and predictable across system reboots. Implementing PNAs involves a more detailed initial configuration compared to traditional naming. One of the key benefits of PNAs is their resilience to changes in hardware configurations, making them ideal for maintaining consistent naming conventions. When using PNAs, you can reference storage devices within scripts, configuration files, and management tools without concerns about unexpected name changes. Additionally, PNAs often include valuable metadata, such as device type or manufacturer information, enhancing their descriptiveness for effective device identification and management. 2.1. Persistent attributes for identifying file systems and block devices In RHEL 9 storage, persistent naming attributes (PNAs) are mechanisms that provide consistent and reliable naming for storage devices across system reboots, hardware changes, or other events. These attributes are used to identify storage devices consistently, even if the storage devices are added, removed, or reconfigured. PNAs are used to identify both file systems and block devices, but they serve different purposes: Persistent attributes for identifying file systems Universally unique identifier (UUID) UUIDs are primarily used to uniquely identify file systems on storage devices. Each file system has its own UUID, and this identifier remains constant even if the file system is unmounted, remounted, or the device is detached and reattached. Label Labels are user-assigned names for file systems. While they can be used to identify and reference file systems, they are not as standardized as UUIDs. Labels are often used as alternatives to UUIDs to specify file systems in configuration files. When you assign a label to a file system, it becomes part of the file system metadata. This label persists with the file system even if you mount the file system on different mount points or different systems. Persistent attributes for identifying block devices Universally unique identifier (UUID) UUIDs can be used to identify storage block devices. When a storage device is formatted or when a file system is created on it, a UUID is often assigned to the device itself. This UUID is embedded within the file system metadata or partition table and is used as a reference for persistent device naming. It allows you to uniquely identify the block device, even if you change the file system or reformat it. World Wide Identifier (WWID) WWIDs are globally unique identifiers associated with storage block devices. 
They are commonly used in Fibre Channel Storage Area Networks (SANs) to identify Host Bus Adapters (HBAs) or network interfaces that connect servers to SAN storage devices. WWIDs ensure consistent communication between servers and SAN storage devices and help manage redundant paths to storage devices. Serial number The serial number is a unique identifier assigned to each storage block device by the manufacturer. It can be used to differentiate between storage devices and may be used in combination with other attributes like UUIDs or WWIDs for device management. 2.2. udev device naming rules You can define rules for assigning persistent names to devices with the userspace device manager ( udev ) subsystem. These rules are stored in a file with a .rules extension in the /etc/udev/rules.d/ directory. The purpose of these rules is to ensure that storage devices are consistently and predictably identified, even across system reboots and configuration changes. udev rules are written in a human-readable format using key-value pairs. When a device is detected or initialized, udev evaluates these rules sequentially, based on the order they are defined. The first matching rule is applied to the device, determining its name and how it will be identified within the system. In the case of storage devices, udev rules create symbolic links in the /dev/disk/ directory. These symbolic links provide user-friendly aliases for storage devices, making it more convenient to refer to and manage these devices. You can create custom udev rules to specify how devices should be named based on various attributes such as serial numbers, World Wide Name (WWN) identifiers, or other device-specific characteristics. By defining specific naming rules, you have precise control over how devices are identified within the system. There are two primary locations for udev rules: /lib/udev/rules.d/ directory contains default rules that come with the udev package. /etc/udev/rules.d directory is intended for custom udev rules. While udev rules are very flexible, it is important to be aware of udev limitations: Accessibility Timing: Some storage devices might not be accessible at the time of a udev query. Event-Based Processing: The kernel can send udev events at any time, potentially triggering rule processing and link removal if a device is inaccessible. Processing Delay: There might be a delay between event generation and processing, especially with numerous devices, causing a lag between kernel detection and link availability. Device Accessibility: External programs invoked by udev rules, like blkid , might briefly open the device, making it temporarily inaccessible for other tasks. Link Updates: Device names managed by udev in /dev/disk/ can change between major releases, requiring link updates. Additional resources udev man page on your system | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_storage_devices/persistent-naming-attributes_managing-storage-devices |
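As a concrete illustration of the custom udev rules described above, the following sketch creates a stable symbolic link for one disk based on its serial number. The serial value, the rules file name, and the link name are hypothetical placeholders; you would substitute the properties reported for your own hardware.

# /etc/udev/rules.d/99-backup-disk.rules  (hypothetical example)
# Create /dev/disk/by-custom/backup-disk for the disk with this serial number
SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="WD-EXAMPLE123456", SYMLINK+="disk/by-custom/backup-disk"

You can read the ID_SERIAL and other properties of a device with udevadm info --query=property --name=/dev/sda. After adding the rule, reload the rules and re-trigger udev so the link is created without a reboot:

# udevadm control --reload
# udevadm trigger --subsystem-match=block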
9.7. Large Objects (LOBs) | 9.7. Large Objects (LOBs) Large Objects (LOBs) are data types for values that are too large to handle as ordinary column data, such as multimedia content or long text. The three main large object runtime data types used by JBoss are: Binary (BLOB) Contains multimedia objects such as images and audio. Character (CLOB) Contains ASCII characters. Extensible Markup Language (XML) Contains textual data. LOBs and JBoss The JBoss Data Services Connector API returns a reference to the LOB if allowed by the JBoss Data Services server. The JBoss Data Services server or JDBC driver can then access the data via a stream rather than retrieving the data all at once. This is useful for several reasons: Reduces memory usage when returning the result set to the user. Improves performance by passing less data in the result set. Enables access to LOBs when required rather than assuming that users will always use the LOB data. Enables handling of arbitrarily large data values within a fixed amount of JBoss Data Services memory. These benefits are achieved if the Connector itself does not materialize an entire LOB all at once. For example, the JDBC API supports a streaming interface for BLOB and CLOB data. Source LOB values are typically accessed by reference, rather than having the value copied to a temporary location. Care must be taken to ensure that source LOBs are returned in a memory-safe manner. LOBs are broken into pieces when being created and streamed. The size of each piece when fetched by the client can be configured. Cached LOBs are copied rather than relying on the reference to the source LOB. Temporary LOBs created by Teiid are cleaned up when the result set or statement is closed. To rely on implicit garbage collection based cleanup instead of statement close, the Teiid session variable clean_lobs_onclose can be set to false (by issuing the query "SELECT teiid_session_set('clean_lobs_onclose', false)" - which can be done for example via the new connection sql in the datasource definition). This can be used for local client scenarios that relied on the implicit behavior, such as Designer generated REST VDBs. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/large_objects_lobs
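Where the passage above mentions setting clean_lobs_onclose through the new connection SQL of the datasource definition, the following is a minimal sketch of how that could look in a JBoss EAP datasource fragment. The JNDI name, pool name, connection URL, and driver name are hypothetical placeholders; only the new-connection-sql statement is taken from the text above.

<!-- Hypothetical datasource fragment; only new-connection-sql comes from the passage above -->
<datasource jndi-name="java:/teiidDS" pool-name="teiidDS">
    <connection-url>jdbc:teiid:MyVDB@mm://localhost:31000</connection-url>
    <driver>teiid</driver>
    <!-- Rely on garbage-collection-based cleanup of temporary LOBs instead of statement close -->
    <new-connection-sql>SELECT teiid_session_set('clean_lobs_onclose', false)</new-connection-sql>
</datasource>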
26.8. Sample Parameter File and CMS Configuration File | 26.8. Sample Parameter File and CMS Configuration File To change the parameter file, begin by extending the shipped generic.prm file. Example of generic.prm file: Example of redhat.conf file configuring a QETH network device (pointed to by CMSCONFFILE in generic.prm ): | [
"root=\"/dev/ram0\" ro ip=\"off\" ramdisk_size=\"40000\" cio_ignore=\"all,!0.0.0009\" CMSDASD=\"191\" CMSCONFFILE=\"redhat.conf\" vnc",
"NETTYPE=\"qeth\" SUBCHANNELS=\"0.0.0600,0.0.0601,0.0.0602\" PORTNAME=\"FOOBAR\" PORTNO=\"0\" LAYER2=\"1\" MACADDR=\"02:00:be:3a:01:f3\" HOSTNAME=\"foobar.systemz.example.com\" IPADDR=\"192.168.17.115\" NETMASK=\"255.255.255.0\" GATEWAY=\"192.168.17.254\" DNS=\"192.168.17.1\" SEARCHDNS=\"systemz.example.com:example.com\" DASD=\"200-203\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-parmfiles-sample_files |
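The CMSDASD and CMSCONFFILE variables in generic.prm tell the installer which CMS minidisk (device number 191 in this example) holds the configuration file and what that file is called. As a further, purely illustrative sketch, a host whose network will be configured interactively could keep its CMS configuration file down to the storage and host name variables already used above; the values are placeholders.

DASD="200-203"
HOSTNAME="s390host.example.com"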
Chapter 16. Creating a performance profile | Chapter 16. Creating a performance profile Learn about the Performance Profile Creator (PPC) and how you can use it to create a performance profile. 16.1. About the Performance Profile Creator The Performance Profile Creator (PPC) is a command-line tool, delivered with the Performance Addon Operator, used to create the performance profile. The tool consumes must-gather data from the cluster and several user-supplied profile arguments. The PPC generates a performance profile that is appropriate for your hardware and topology. The tool is run by one of the following methods: Invoking podman Calling a wrapper script 16.1.1. Gathering data about your cluster using the must-gather command The Performance Profile Creator (PPC) tool requires must-gather data. As a cluster administrator, run the must-gather command to capture information about your cluster. Prerequisites Access to the cluster as a user with the cluster-admin role. Access to the Performance Addon Operator image. The OpenShift CLI ( oc ) installed. Procedure Navigate to the directory where you want to store the must-gather data. Run must-gather on your cluster: USD oc adm must-gather --image=<PAO_image> --dest-dir=<dir> Note The must-gather command must be run with the performance-addon-operator-must-gather image. The output can optionally be compressed. Compressed output is required if you are running the Performance Profile Creator wrapper script. Example USD oc adm must-gather --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.9 --dest-dir=must-gather Create a compressed file from the must-gather directory: USD tar cvaf must-gather.tar.gz must-gather/ 16.1.2. Running the Performance Profile Creator using podman As a cluster administrator, you can run podman and the Performance Profile Creator to create a performance profile. Prerequisites Access to the cluster as a user with the cluster-admin role. A cluster installed on bare metal hardware. A node with podman and OpenShift CLI ( oc ) installed. Procedure Check the machine config pool: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h Use Podman to authenticate to registry.redhat.io : USD podman login registry.redhat.io Username: myrhusername Password: ************ Optional: Display help for the PPC tool: USD podman run --entrypoint performance-profile-creator registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9 -h Example output A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default "log") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default "must-gather") --power-consumption-mode string The power consumption mode. 
[Valid values: default, low-latency, ultra-low-latency] (default "default") --profile-name string Name of the performance profile to be created (default "performance") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default "restricted") --user-level-networking Run with User level Networking(DPDK) enabled Run the Performance Profile Creator tool in discovery mode: Note Discovery mode inspects your cluster using the output from must-gather . The output produced includes information on: The NUMA cell partitioning with the allocated CPU ids Whether hyperthreading is enabled Using this information you can set appropriate values for some of the arguments supplied to the Performance Profile Creator tool. USD podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9 --info log --must-gather-dir-path /must-gather Note This command uses the performance profile creator as a new entry point to podman . It maps the must-gather data for the host into the container image and invokes the required user-supplied profile arguments to produce the my-performance-profile.yaml file. The -v option can be the path to either: The must-gather output directory An existing directory containing the must-gather decompressed tarball The info option requires a value which specifies the output format. Possible values are log and JSON. The JSON format is reserved for debugging. Run podman : USD podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency > my-performance-profile.yaml Note The Performance Profile Creator arguments are shown in the Performance Profile Creator arguments table. The following arguments are required: reserved-cpu-count mcp-name rt-kernel The mcp-name argument in this example is set to worker-cnf based on the output of the command oc get mcp . For single-node OpenShift use --mcp-name=master . Review the created YAML file: USD cat my-performance-profile.yaml Example output apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - intel_idle.max_cstate=0 - idle=poll cpu: isolated: 1,3,5,7,9,11,13,15,17,19-39,41,43,45,47,49,51,53,55,57,59-79 reserved: 0,2,4,6,8,10,12,14,16,18,40,42,44,46,48,50,52,54,56,58 nodeSelector: node-role.kubernetes.io/worker-cnf: "" numa: topologyPolicy: single-numa-node realTimeKernel: enabled: true Apply the generated profile: Note Install the Performance Addon Operator before applying the profile. USD oc apply -f my-performance-profile.yaml 16.1.2.1. How to run podman to create a performance profile The following example illustrates how to run podman to create a performance profile with 20 reserved CPUs that are to be split across the NUMA nodes. 
Node hardware configuration: 80 CPUs Hyperthreading enabled Two NUMA nodes Even numbered CPUs run on NUMA node 0 and odd numbered CPUs run on NUMA node 1 Run podman to create the performance profile: USD podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=true --must-gather-dir-path /must-gather > my-performance-profile.yaml The created profile is described in the following YAML: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 10-39,50-79 reserved: 0-9,40-49 nodeSelector: node-role.kubernetes.io/worker-cnf: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true Note In this case, 10 CPUs are reserved on NUMA node 0 and 10 are reserved on NUMA node 1. 16.1.3. Running the Performance Profile Creator wrapper script The performance profile wrapper script simplifies the running of the Performance Profile Creator (PPC) tool. It hides the complexities associated with running podman and specifying the mapping directories and it enables the creation of the performance profile. Prerequisites Access to the Performance Addon Operator image. Access to the must-gather tarball. Procedure Create a file on your local machine named, for example, run-perf-profile-creator.sh : USD vi run-perf-profile-creator.sh Paste the following code into the file: #!/bin/bash readonly CONTAINER_RUNTIME=USD{CONTAINER_RUNTIME:-podman} readonly CURRENT_SCRIPT=USD(basename "USD0") readonly CMD="USD{CONTAINER_RUNTIME} run --entrypoint performance-profile-creator" readonly IMG_EXISTS_CMD="USD{CONTAINER_RUNTIME} image exists" readonly IMG_PULL_CMD="USD{CONTAINER_RUNTIME} image pull" readonly MUST_GATHER_VOL="/must-gather" PAO_IMG="registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9" MG_TARBALL="" DATA_DIR="" usage() { print "Wrapper usage:" print " USD{CURRENT_SCRIPT} [-h] [-p image][-t path] -- [performance-profile-creator flags]" print "" print "Options:" print " -h help for USD{CURRENT_SCRIPT}" print " -p Performance Addon Operator image" print " -t path to a must-gather tarball" USD{IMG_EXISTS_CMD} "USD{PAO_IMG}" && USD{CMD} "USD{PAO_IMG}" -h } function cleanup { [ -d "USD{DATA_DIR}" ] && rm -rf "USD{DATA_DIR}" } trap cleanup EXIT exit_error() { print "error: USD*" usage exit 1 } print() { echo "USD*" >&2 } check_requirements() { USD{IMG_EXISTS_CMD} "USD{PAO_IMG}" || USD{IMG_PULL_CMD} "USD{PAO_IMG}" || \ exit_error "Performance Addon Operator image not found" [ -n "USD{MG_TARBALL}" ] || exit_error "Must-gather tarball file path is mandatory" [ -f "USD{MG_TARBALL}" ] || exit_error "Must-gather tarball file not found" DATA_DIR=USD(mktemp -d -t "USD{CURRENT_SCRIPT}XXXX") || exit_error "Cannot create the data directory" tar -zxf "USD{MG_TARBALL}" --directory "USD{DATA_DIR}" || exit_error "Cannot decompress the must-gather tarball" chmod a+rx "USD{DATA_DIR}" return 0 } main() { while getopts ':hp:t:' OPT; do case "USD{OPT}" in h) usage exit 0 ;; p) PAO_IMG="USD{OPTARG}" ;; t) MG_TARBALL="USD{OPTARG}" ;; ?) 
exit_error "invalid argument: USD{OPTARG}" ;; esac done shift USD((OPTIND - 1)) check_requirements || exit 1 USD{CMD} -v "USD{DATA_DIR}:USD{MUST_GATHER_VOL}:z" "USD{PAO_IMG}" "USD@" --must-gather-dir-path "USD{MUST_GATHER_VOL}" echo "" 1>&2 } main "USD@" Add execute permissions for everyone on this script: USD chmod a+x run-perf-profile-creator.sh Optional: Display the run-perf-profile-creator.sh command usage: USD ./run-perf-profile-creator.sh -h Expected output Wrapper usage: run-perf-profile-creator.sh [-h] [-p image][-t path] -- [performance-profile-creator flags] Options: -h help for run-perf-profile-creator.sh -p Performance Addon Operator image 1 -t path to a must-gather tarball 2 A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default "log") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default "must-gather") --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default "default") --profile-name string Name of the performance profile to be created (default "performance") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default "restricted") --user-level-networking Run with User level Networking(DPDK) enabled Note There two types of arguments: Wrapper arguments namely -h , -p and -t PPC arguments 1 Optional: Specify the Performance Addon Operator image. If not set, the default upstream image is used: registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9 . 2 -t is a required wrapper script argument and specifies the path to a must-gather tarball. Run the performance profile creator tool in discovery mode: Note Discovery mode inspects your cluster using the output from must-gather . The output produced includes information on: The NUMA cell partitioning with the allocated CPU IDs Whether hyperthreading is enabled Using this information you can set appropriate values for some of the arguments supplied to the Performance Profile Creator tool. USD ./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --info=log Note The info option requires a value which specifies the output format. Possible values are log and JSON. The JSON format is reserved for debugging. Check the machine config pool: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h Create a performance profile: USD ./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=2 --rt-kernel=true > my-performance-profile.yaml Note The Performance Profile Creator arguments are shown in the Performance Profile Creator arguments table. 
The following arguments are required: reserved-cpu-count mcp-name rt-kernel The mcp-name argument in this example is set to worker-cnf based on the output of the command oc get mcp . For single-node OpenShift use --mcp-name=master . Review the created YAML file: USD cat my-performance-profile.yaml Example output apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 1-39,41-79 reserved: 0,40 nodeSelector: node-role.kubernetes.io/worker-cnf: "" numa: topologyPolicy: restricted realTimeKernel: enabled: false Apply the generated profile: Note Install the Performance Addon Operator before applying the profile. USD oc apply -f my-performance-profile.yaml 16.1.4. Performance Profile Creator arguments Table 16.1. Performance Profile Creator arguments Argument Description disable-ht Disable hyperthreading. Possible values: true or false . Default: false . Warning If this argument is set to true you should not disable hyperthreading in the BIOS. Disabling hyperthreading is accomplished with a kernel command line argument. info This captures cluster information and is used in discovery mode only. Discovery mode also requires the must-gather-dir-path argument. If any other arguments are set they are ignored. Possible values: log JSON Note These options define the output format with the JSON format being reserved for debugging. Default: log . mcp-name MCP name for example worker-cnf corresponding to the target machines. This parameter is required. must-gather-dir-path Must gather directory path. This parameter is required. When the user runs the tool with the wrapper script must-gather is supplied by the script itself and the user must not specify it. power-consumption-mode The power consumption mode. Possible values: default low-latency ultra-low-latency Default: default . profile-name Name of the performance profile to create. Default: performance . reserved-cpu-count Number of reserved CPUs. This parameter is required. Note This must be a natural number. A value of 0 is not allowed. rt-kernel Enable real-time kernel. This parameter is required. Possible values: true or false . split-reserved-cpus-across-numa Split the reserved CPUs across NUMA nodes. Possible values: true or false . Default: false . topology-manager-policy Kubelet Topology Manager policy of the performance profile to be created. Possible values: single-numa-node best-effort restricted Default: restricted . user-level-networking Run with user level networking (DPDK) enabled. Possible values: true or false . Default: false . 16.2. Additional resources For more information about the must-gather tool, see Gathering data about your cluster . | [
"oc adm must-gather --image=<PAO_image> --dest-dir=<dir>",
"oc adm must-gather --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.9 --dest-dir=must-gather",
"tar cvaf must-gather.tar.gz must-gather/",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h",
"podman login registry.redhat.io",
"Username: myrhusername Password: ************",
"podman run --entrypoint performance-profile-creator registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9 -h",
"A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled",
"podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9 --info log --must-gather-dir-path /must-gather",
"podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency > my-performance-profile.yaml",
"cat my-performance-profile.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - intel_idle.max_cstate=0 - idle=poll cpu: isolated: 1,3,5,7,9,11,13,15,17,19-39,41,43,45,47,49,51,53,55,57,59-79 reserved: 0,2,4,6,8,10,12,14,16,18,40,42,44,46,48,50,52,54,56,58 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: single-numa-node realTimeKernel: enabled: true",
"oc apply -f my-performance-profile.yaml",
"podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=true --must-gather-dir-path /must-gather > my-performance-profile.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 10-39,50-79 reserved: 0-9,40-49 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true",
"vi run-perf-profile-creator.sh",
"#!/bin/bash readonly CONTAINER_RUNTIME=USD{CONTAINER_RUNTIME:-podman} readonly CURRENT_SCRIPT=USD(basename \"USD0\") readonly CMD=\"USD{CONTAINER_RUNTIME} run --entrypoint performance-profile-creator\" readonly IMG_EXISTS_CMD=\"USD{CONTAINER_RUNTIME} image exists\" readonly IMG_PULL_CMD=\"USD{CONTAINER_RUNTIME} image pull\" readonly MUST_GATHER_VOL=\"/must-gather\" PAO_IMG=\"registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.9\" MG_TARBALL=\"\" DATA_DIR=\"\" usage() { print \"Wrapper usage:\" print \" USD{CURRENT_SCRIPT} [-h] [-p image][-t path] -- [performance-profile-creator flags]\" print \"\" print \"Options:\" print \" -h help for USD{CURRENT_SCRIPT}\" print \" -p Performance Addon Operator image\" print \" -t path to a must-gather tarball\" USD{IMG_EXISTS_CMD} \"USD{PAO_IMG}\" && USD{CMD} \"USD{PAO_IMG}\" -h } function cleanup { [ -d \"USD{DATA_DIR}\" ] && rm -rf \"USD{DATA_DIR}\" } trap cleanup EXIT exit_error() { print \"error: USD*\" usage exit 1 } print() { echo \"USD*\" >&2 } check_requirements() { USD{IMG_EXISTS_CMD} \"USD{PAO_IMG}\" || USD{IMG_PULL_CMD} \"USD{PAO_IMG}\" || exit_error \"Performance Addon Operator image not found\" [ -n \"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file path is mandatory\" [ -f \"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file not found\" DATA_DIR=USD(mktemp -d -t \"USD{CURRENT_SCRIPT}XXXX\") || exit_error \"Cannot create the data directory\" tar -zxf \"USD{MG_TARBALL}\" --directory \"USD{DATA_DIR}\" || exit_error \"Cannot decompress the must-gather tarball\" chmod a+rx \"USD{DATA_DIR}\" return 0 } main() { while getopts ':hp:t:' OPT; do case \"USD{OPT}\" in h) usage exit 0 ;; p) PAO_IMG=\"USD{OPTARG}\" ;; t) MG_TARBALL=\"USD{OPTARG}\" ;; ?) exit_error \"invalid argument: USD{OPTARG}\" ;; esac done shift USD((OPTIND - 1)) check_requirements || exit 1 USD{CMD} -v \"USD{DATA_DIR}:USD{MUST_GATHER_VOL}:z\" \"USD{PAO_IMG}\" \"USD@\" --must-gather-dir-path \"USD{MUST_GATHER_VOL}\" echo \"\" 1>&2 } main \"USD@\"",
"chmod a+x run-perf-profile-creator.sh",
"./run-perf-profile-creator.sh -h",
"Wrapper usage: run-perf-profile-creator.sh [-h] [-p image][-t path] -- [performance-profile-creator flags] Options: -h help for run-perf-profile-creator.sh -p Performance Addon Operator image 1 -t path to a must-gather tarball 2 A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled",
"./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --info=log",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h",
"./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=2 --rt-kernel=true > my-performance-profile.yaml",
"cat my-performance-profile.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 1-39,41-79 reserved: 0,40 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: false",
"oc apply -f my-performance-profile.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/scalability_and_performance/cnf-create-performance-profiles |
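After applying the generated performance profile, you can confirm that it actually rolled out. The following is a minimal verification sketch; the profile name (performance), machine config pool name (worker-cnf), and node name are taken from the examples above and are placeholders for your environment.

oc get performanceprofile performance                      # confirm the PerformanceProfile resource exists
oc get mcp worker-cnf -w                                   # watch the pool until UPDATED is True and UPDATING is False
oc debug node/<worker-cnf-node> -- chroot /host uname -r   # on a node in the pool, a real-time kernel version contains "rt"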
15.5. Additional Resources | 15.5. Additional Resources RPM is an extremely complex utility with many options and methods for querying, installing, upgrading, and removing packages. Refer to the following resources to learn more about RPM. 15.5.1. Installed Documentation rpm --help - This command displays a quick reference of RPM parameters. man rpm - The RPM man page gives more detail about RPM parameters than the rpm --help command. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Package_Management_with_RPM-Additional_Resources |
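As a quick illustration of the query options that the references above document in full, a few commonly used rpm invocations are shown below; the package name httpd is only a placeholder.

rpm -qa               # list all installed packages
rpm -qi httpd         # show summary information for an installed package
rpm -ql httpd         # list the files that a package installed
rpm -qf /etc/passwd   # find which package owns a file
rpm -V httpd          # verify an installed package against the RPM database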
3.9. Suspending Activity on a GFS2 File System | 3.9. Suspending Activity on a GFS2 File System You can suspend write activity to a file system by using the dmsetup suspend command. Suspending write activity allows hardware-based device snapshots to be used to capture the file system in a consistent state. The dmsetup resume command ends the suspension. Usage Start Suspension: dmsetup suspend MountPoint End Suspension: dmsetup resume MountPoint MountPoint Specifies the file system. Examples This example suspends writes to file system /mygfs2 : dmsetup suspend /mygfs2 This example ends suspension of writes to file system /mygfs2 : dmsetup resume /mygfs2 | [
"dmsetup suspend MountPoint",
"dmsetup resume MountPoint",
"dmsetup suspend /mygfs2",
"dmsetup resume /mygfs2"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/s1-manage-suspendfs |
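The suspend and resume commands above are typically wrapped around the snapshot operation itself. The following sketch shows that pattern for the /mygfs2 example; the snapshot step is array-specific and is shown only as a placeholder comment, not as a specific command.

dmsetup suspend /mygfs2   # quiesce the file system: in-flight I/O completes and new writes are blocked
# ... trigger the hardware-based device snapshot here, for example from the storage array's management tool ...
dmsetup resume /mygfs2    # end the suspension and allow writes to continue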
9.6. Import From XML Data File Source | 9.6. Import From XML Data File Source JBoss Data Virtualization supports XML Files as data sources. You can import from these data sources and create the metamodels required to query your data in minutes. Using the steps below you will define your XML data source, configure your parsing parameters for the XML data file, generate a source model containing the required Teiid procedure and create a view table containing the SQL defining the column data in your XML data file. As with Teiid Designer's JDBC, Salesforce and WSDL importers, the XML File importer is based on utilizing a specific Data Tools Connection Profile. The results of the importer will include a source model containing the getTextFiles() procedure or invokeHTTP() procedure which are both supported by JBoss Data Virtualization. The importer will also create a new view model containing a view table for your selected XML source file. Within the view table will be generated SQL transformation containing the getTextFiles() procedure from your source model as well as the column definitions and parameters required for the Teiid XMLTABLE() function used to query the data file. You can also choose to update an existing view model instead of creating a new view model. The XMLTABLE function uses XQuery to produce tabular output. The XMLTABLE function is implicitly a nested table and may be correlated to preceding FROM clause entries. XMLTABLE is part of the SQL/XML 2006 specification. XMLTABLE([<NSP>,] xquery-expression [<PASSING>] [COLUMNS <COLUMN>, ... )] AS name COLUMN := name (FOR ORDINALITY | (datatype [DEFAULT expression] [PATH string])) Teiid Designer will construct the full SQL statement for each view table in the form: SELECT A.entryDate AS entryDate, A.internalAudit AS internalAudit FROM (EXEC CCC.getTextFiles('sample.xml')) AS f, XMLTABLE(XMLNAMESPACES('http://www.kaptest.com/schema/1.0/party' AS pty), '/pty:students/student' PASSING XMLPARSE(DOCUMENT f.file) COLUMNS entryDate FOR ORDINALITY, internalAudit string PATH '/internalAudit') AS A To import from your XML data file source follow the steps below. In Model Explorer, right-click and then click Import... or click the File > Import... action in the toolbar or select a project, folder or model in the tree and click Import... Select the import option Teiid Designer > File Source (XML) >> Source and View Model and click > Figure 9.19. Import from XML File Source The page of the wizard allows selection of the XML Import mode that specifies whether the XML file is local or remote. The description at the top describes what operations this wizard will perform. Select either the XML file on local file system or XML file via remote URL and click > Figure 9.20. XML Import File Options Page Select existing or connection profile from the drop-down selector or press New... button to launch the New Connection Profile dialog or Edit... to modify/change an existing connection profile prior to selection. After selecting a Connection Profile, the XML data file from the connection profile will be displayed in the Data File Name panel. Select the data file you wish to process. The data from this file, along with your custom import options, will be used to construct a view table containing the required SQL transformation for retrieving your data and returning a result set. Lastly enter the unique source model name in the Source Model Definition section at the bottom of the page or select an existing source model using the Browse button. 
Note the Model Status section which will indicate the validity of the model name, whether the model exists or not and whether the model already contains the getTextFiles() procedure. In this case, the source model nor the procedure will be generated. When finished with this page, click > . Figure 9.21. XML Data File Source Selection Page On the page enter the JNDI name and click > . The primary purpose of this importer is to help you create a view table containing the transformation required to query the user defined data file. This page presents a number of options you can use to customize the Generated SQL Statement, shown in the bottom panel. The to panel contains an XML tree view of your file contents and actions/buttons you can use to create column entries displayed in the middle, Column Information panel. To create columns, select a root XML element and right-click select Set as root path action. This populates the root path value. , select columns in the tree that you wish to include on your query. You can modify or create custom columns, by using the ADD , DELETE , UP , DOWN to manage the column info in your SQL. Note that the Path property value for a column is the selected element's path relative to the defined root path. If no root path is defined all paths are absolute. Each column entry requires a datatype and an optional default value. See the Development Guide Volume 3: Reference Material for details on the XMLTABLE() function. When finished with this page, click > . Figure 9.22. XML File Delimited Columns Options Page On the View Model Definition page, select the target folder location where your new view model will be created. You can also select an existing model for your new view tables. Note the Model Status section which will indicate the validity of the model name, whether the model exists or not. Lastly, enter a unique, valid view table name. Click Finish to generate your models and finish the wizard. Figure 9.23. View Model Definition Page | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/import_from_xml_data_file_source |
18.10. Defining Permissions | 18.10. Defining Permissions Permission rules define the rights that are associated with the ACI and whether access is allowed or denied. In an ACI, the following highlighted part is the permission rule: Syntax The general syntax of a permission rule is: permission : Sets if the ACI allows or denies permission. rights : Sets the rights which the ACI allows or denies. See Section 18.10.1, "User rights" . Example 18.11. Defining Permissions To enable users stored in the ou=People,dc=example,dc=com entry to search and display all attributes in their own entry: 18.10.1. User rights The rights in a permission rule define what operations are granted or denied. In an ACI, you can set one or multiple of the following rights: Table 18.1. User Rights Right Description read Sets whether users can read directory data. This permission applies only to search operations in LDAP. write Sets whether users can modify an entry by adding, modifying, or deleting attributes. This permission applies to the modify and modrdn operations in LDAP. add Sets whether users can create an entry. This permission applies only to the add operation in LDAP. delete Sets whether users can delete an entry. This permission applies only to the delete operation in LDAP. search Sets whether users can search for directory data. To view data returned as part of a search result, assign search and read rights. This permission applies only to search operations in LDAP. compare Sets whether the users can compare data they supply with data stored in the directory. With compare rights, the directory returns a success or failure message in response to an inquiry, but the user cannot see the value of the entry or attribute. This permission applies only to the compare operation in LDAP. selfwrite Sets whether users can add or delete their own DN from a group. This right is used only for group management. proxy Sets whether the specified DN can access the target with the rights of another entry. The proxy right is granted within the scope of the ACL, and the user or group who as the right granted can run commands as any Directory Server user. You cannot restrict the proxy rights to certain users. For security reasons, set ACIs that use the proxy right at the most targeted level of the directory. all Sets all of the rights, except proxy . 18.10.2. Rights Required for LDAP Operations This section describes the rights you must grant to users depending on the type of LDAP operation you want to authorize them to perform. Adding an entry: Grant add permission on the entry that you want to add. Grant write permission on the value of each attribute in the entry. This right is granted by default but can be restricted using the targattrfilters keyword. Deleting an entry: Grant delete permission on the entry that you want to delete. Grant write permission on the value of each attribute in the entry. This right is granted by default but can be restricted using the targattrfilters keyword. Modifying an attribute in an entry: Grant write permission on the attribute type. Grant write permission on the value of each attribute type. This right is granted by default but can be restricted using the targattrfilters keyword. Modifying the RDN of an entry: Grant write permission on the entry. Grant write permission on the attribute type that is used in the new RDN. Grant write permission on the attribute type that is used in the old RDN, if you want to grant the right to delete the old RDN. 
Grant write permission on the value of the attribute type that is used in the new RDN. This right is granted by default but can be restricted using the targattrfilters keyword. Comparing the value of an attribute: Grant compare permission on the attribute type. Searching for entries: Grant search permission on each attribute type used in the search filter. Grant read permission on the attribute types used in the entry. 18.10.3. Access Control and the modrdn Operation To explicitly deny modrdn operations using ACIs, target the relevant entries but omit the targetattr keyword. For example, the following ACI defines that the cn=example,ou=Groups,dc=example,dc=com group cannot rename entries in ou=people,dc=example,dc=com which contain the cn attribute: | [
"( target_rule ) (version 3.0; acl \" ACL_name \"; permission_rule bind_rules ;)",
"permission ( rights )",
"ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: ou=People,dc=example,dc=com changetype: modify add: aci aci: (target = \"ldap:///ou=People,dc=example,dc=com\") (version 3.0; acl \"Allow users to read and search attributes of own entry\"; allow (search, read) (userdn = \"ldap:///self\");)",
"ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: dc=example,dc=com changetype: modify add: aci aci: (target=\"ldap:///cn=*,ou=people,dc=example,dc=com\") (version 3.0; acl \"Deny modrdn rights to the example group\"; deny(write) groupdn=\"ldap:///cn= example ,ou=groups,dc=example,dc=com\";)"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/defining_permissions |
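To illustrate the selfwrite right described in the rights table above, the following is a hedged sketch of an ACI that lets authenticated users add or remove their own DN from the member attribute of groups under ou=Groups; the suffix, host name, and attribute name are placeholders and should be adapted to your directory tree.

ldapmodify -D "cn=Directory Manager" -W -p 389 -h server.example.com -x <<EOF
dn: ou=Groups,dc=example,dc=com
changetype: modify
add: aci
aci: (targetattr="member")(version 3.0; acl "Allow self add and remove in groups"; allow (selfwrite) (userdn="ldap:///all");)
EOF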
Chapter 2. LVM Components | Chapter 2. LVM Components This chapter describes the components of an LVM Logical volume. 2.1. Physical Volumes The underlying physical storage unit of an LVM logical volume is a block device such as a partition or whole disk. To use the device for an LVM logical volume the device must be initialized as a physical volume (PV). Initializing a block device as a physical volume places a label near the start of the device. By default, the LVM label is placed in the second 512-byte sector. You can overwrite this default by placing the label on any of the first 4 sectors. This allows LVM volumes to co-exist with other users of these sectors, if necessary. An LVM label provides correct identification and device ordering for a physical device, since devices can come up in any order when the system is booted. An LVM label remains persistent across reboots and throughout a cluster. The LVM label identifies the device as an LVM physical volume. It contains a random unique identifier (the UUID) for the physical volume. It also stores the size of the block device in bytes, and it records where the LVM metadata will be stored on the device. The LVM metadata contains the configuration details of the LVM volume groups on your system. By default, an identical copy of the metadata is maintained in every metadata area in every physical volume within the volume group. LVM metadata is small and stored as ASCII. Currently LVM allows you to store 0, 1 or 2 identical copies of its metadata on each physical volume. The default is 1 copy. Once you configure the number of metadata copies on the physical volume, you cannot change that number at a later time. The first copy is stored at the start of the device, shortly after the label. If there is a second copy, it is placed at the end of the device. If you accidentally overwrite the area at the beginning of your disk by writing to a different disk than you intend, a second copy of the metadata at the end of the device will allow you to recover the metadata. For detailed information about the LVM metadata and changing the metadata parameters, see Appendix D, LVM Volume Group Metadata . 2.1.1. LVM Physical Volume Layout Figure 2.1, "Physical Volume layout" shows the layout of an LVM physical volume. The LVM label is on the second sector, followed by the metadata area, followed by the usable space on the device. Note In the Linux kernel (and throughout this document), sectors are considered to be 512 bytes in size. Figure 2.1. Physical Volume layout 2.1.2. Multiple Partitions on a Disk LVM allows you to create physical volumes out of disk partitions. It is generally recommended that you create a single partition that covers the whole disk to label as an LVM physical volume for the following reasons: Administrative convenience It is easier to keep track of the hardware in a system if each real disk only appears once. This becomes particularly true if a disk fails. In addition, multiple physical volumes on a single disk may cause a kernel warning about unknown partition types at boot-up. Striping performance LVM can not tell that two physical volumes are on the same physical disk. If you create a striped logical volume when two physical volumes are on the same physical disk, the stripes could be on different partitions on the same disk. This would result in a decrease in performance rather than an increase. 
Although it is not recommended, there may be specific circumstances when you need to divide a disk into separate LVM physical volumes. For example, on a system with few disks it may be necessary to move data around partitions when you are migrating an existing system to LVM volumes. Additionally, if you have a very large disk and want to have more than one volume group for administrative purposes, then it is necessary to partition the disk. If you do have a disk with more than one partition and both of those partitions are in the same volume group, take care to specify which partitions are to be included in a logical volume when creating striped volumes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/lvm_components
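A short sketch of initializing a whole-disk partition as a physical volume and inspecting the label and metadata described above; the device name /dev/sdb1 is a placeholder, and the metadata copy count is set at creation time only because, as noted, it cannot be changed later.

pvcreate --pvmetadatacopies 2 /dev/sdb1   # write the LVM label and keep two copies of the metadata on the device
pvs /dev/sdb1                             # summary view: size, volume group membership (none yet), free space
pvdisplay /dev/sdb1                       # detailed view, including the PV UUID stored in the label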
19.4. Virtual Machine Graphical Console | 19.4. Virtual Machine Graphical Console This window displays a guest's graphical console. Guests can use several different protocols to export their graphical frame buffers: virt-manager supports VNC and SPICE . If your virtual machine is set to require authentication, the Virtual Machine graphical console prompts you for a password before the display appears. Figure 19.9. Graphical console window Note VNC is considered insecure by many security experts; however, several changes have been made to enable the secure usage of VNC for virtualization on Red Hat Enterprise Linux. The guest machines only listen to the local host's loopback address ( 127.0.0.1 ). This ensures only those with shell privileges on the host can access virt-manager and the virtual machine through VNC. Although virt-manager can be configured to listen on other public network interfaces, and alternative methods can be configured, doing so is not recommended. Remote administration can be performed by tunneling over SSH, which encrypts the traffic. Although VNC can be configured for remote access without tunneling over SSH, it is not recommended for security reasons. To remotely administer the guest, follow the instructions in Chapter 18, Remote Management of Guests . TLS can provide enterprise-level security for managing guest and host systems. Your local desktop can intercept key combinations (for example, Ctrl+Alt+F1) to prevent them from being sent to the guest machine. You can use the Send key menu option to send these sequences. From the guest machine window, click the Send key menu and select the key sequence to send. In addition, from this menu you can also capture the screen output. SPICE is an alternative to VNC available for Red Hat Enterprise Linux. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guests_with_the_virtual_machine_manager_virt_manager-virtual_machine_graphical_console
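A hedged sketch of the SSH tunneling approach mentioned above: because the guest's VNC server listens only on the host's loopback address, you forward a local port to it over SSH and point a VNC client at the forwarded port. The host name, user, and display number are placeholders; local port 5900 corresponds to VNC display 0.

ssh -L 5900:127.0.0.1:5900 root@host.example.com -N &     # forward local port 5900 to the guest's VNC display on the host
vncviewer 127.0.0.1:0                                     # connect through the tunnel (display 0 = port 5900)
# Alternatively, virt-viewer can open the console directly over an SSH-backed libvirt URI:
virt-viewer --connect qemu+ssh://root@host.example.com/system guest1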
Chapter 7. Migrating existing content | Chapter 7. Migrating existing content Use the following sections to learn how to use the awx-manage command to assist with additional steps in the migration process once you have upgraded to Red Hat Ansible Automation Platform 2.0 and automation controller 4.0. Additionally, learn more about migrating between versions of Ansible. 7.1. Migrating virtual envs to automation execution environments Use the following sections to assist with additional steps in the migration process once you have upgraded to Red Hat Ansible Automation Platform 2.0 and automation controller 4.0. 7.1.1. Listing custom virtual environments You can list the virtual environments on your automation controller instance using the awx-manage command. Procedure SSH into your automation controller instance and run: USD awx-manage list_custom_venvs A list of discovered virtual environments will appear. # Discovered virtual environments: /var/lib/awx/venv/testing /var/lib/venv/new_env To export the contents of a virtual environment, re-run while supplying the path as an argument: awx-manage export_custom_venv /path/to/venv 7.1.2. Viewing objects associated with a custom virtual environment View the organizations, jobs, and inventory sources associated with a custom virtual environment using the awx-manage command. Procedure SSH into your automation controller instance and run: USD awx-manage custom_venv_associations /path/to/venv A list of associated objects will appear. inventory_sources: - id: 15 name: celery job_templates: - id: 9 name: Demo Job Template @ 2:40:47 PM - id: 13 name: elephant organizations - id: 3 name: alternating_bongo_meow - id: 1 name: Default projects: [] 7.1.3. Selecting the custom virtual environment to export Select the custom virtual environment you wish to export using the awx-manage export_custom_venv command. Procedure SSH into your automation controller instance and run: USD awx-manage export_custom_venv /path/to/venv The output from this command will show a pip freeze of what is in the specified virtual environment. This information can be copied into a requirements.txt file for Ansible Builder to use for creating a new automation execution environment image numpy==1.20.2 pandas==1.2.4 python-dateutil==2.8.1 pytz==2021.1 six==1.16.0 To list all available custom virtual environments run: awx-manage list_custom_venvs Note Pass the -q flag when running awx-manage list_custom_venvs to reduce output. 7.2. Migrating between Ansible Core versions Migrating between versions of Ansible Core requires you to update your playbooks, plugins and other parts of your Ansible infrastructure to ensure they work with the latest version. This process requires that changes are validated against the updates made to each successive version of Ansible Core. If you intend to migrate from Ansible 2.9 to Ansible 2.11, you first need to verify that you meet the requirements of Ansible 2.10, and from there make updates to 2.11. 7.2.1. Ansible Porting Guides The Ansible Porting Guide is a series of documents that provide information on the behavioral changes between consecutive Ansible versions. Refer to the guides when migrating from one version of Ansible to a newer version. 7.2.2. Additional resources Refer to the Ansible 2.9 Porting Guide for behavior changes between Ansible 2.8 and Ansible 2.9. Refer to the Ansible 2.10 Porting Guide for behavior changes between Ansible 2.9 and Ansible 2.10. | [
"awx-manage list_custom_venvs",
"Discovered virtual environments: /var/lib/awx/venv/testing /var/lib/venv/new_env To export the contents of a virtual environment, re-run while supplying the path as an argument: awx-manage export_custom_venv /path/to/venv",
"awx-manage custom_venv_associations /path/to/venv",
"inventory_sources: - id: 15 name: celery job_templates: - id: 9 name: Demo Job Template @ 2:40:47 PM - id: 13 name: elephant organizations - id: 3 name: alternating_bongo_meow - id: 1 name: Default projects: []",
"awx-manage export_custom_venv /path/to/venv",
"numpy==1.20.2 pandas==1.2.4 python-dateutil==2.8.1 pytz==2021.1 six==1.16.0 To list all available custom virtual environments run: awx-manage list_custom_venvs"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_creator_guide/migrating-existing-content |
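Continuing the export step from section 7.1.3 above, the following is a hedged sketch of feeding the exported package list to Ansible Builder. The venv path and image tag are placeholders, the execution-environment.yml shown is the minimal version 1 format, and you may need to trim any trailing informational lines from the export output before using it as a requirements file.

awx-manage export_custom_venv /var/lib/awx/venv/testing > requirements.txt   # capture the venv contents as pip requirements
cat > execution-environment.yml <<'EOF'
version: 1
dependencies:
  python: requirements.txt
EOF
ansible-builder build --tag my_org/custom-ee:latest --file execution-environment.yml   # build the execution environment image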
Chapter 3. Deploy using local storage devices | Chapter 3. Deploy using local storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Use this section to deploy OpenShift Data Foundation on VMware where OpenShift Container Platform is already installed. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the steps. Installing Local Storage Operator Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 3.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. 
As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.3. Creating OpenShift Data Foundation cluster on VMware vSphere VMware vSphere supports the following three types of local storage: Virtual machine disk (VMDK) Raw device mapping (RDM) VMDirectPath I/O Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have a minimum of three worker nodes with the same storage type and size attached to each node to use local storage devices on VMware. Ensure that the disk type is SSD, which is the only supported disk type. For VMs on VMware vSphere, ensure the disk.EnableUUID option is set to TRUE . You need to have vCenter account privileges to configure the VMs. For more information, see Required vCenter account privileges . To set the disk.EnableUUID option, use the Advanced option of the VM Options in the Customize hardware tab. For more information, see Installing on vSphere . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Create a new StorageClass using the local storage devices option. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Select one of the following: Disks on all nodes to use the available disks that match the selected filters on all nodes. Disks on selected nodes to use the available disks that match the selected filters only on selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with 3 or more nodes is spread across fewer than the minimum requirement of 3 availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . 
Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one of the following Encryption level : Cluster-wide encryption to encrypt the entire cluster (block and file). StorageClass encryption to create encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . 
Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark to it. To verify if flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled): In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . 
In the YAML tab, search for the keys flexibleScaling in spec section and failureDomain in status section. If flexible scaling is true and failureDomain is set to host, the flexible scaling feature is enabled. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide. | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"spec: flexibleScaling: true [...] status: failureDomain: host"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_on_vmware_vsphere/deploy-using-local-storage-devices-vmware |
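The verification steps above can also be performed from the command line. This is a minimal sketch that assumes the default names used in this procedure (the openshift-storage namespace and the ocs-storagecluster storage cluster).

oc get storagecluster -n openshift-storage     # the PHASE column should reach Ready
oc get pods -n openshift-storage               # all pods should be Running or Completed
oc get storagecluster ocs-storagecluster -n openshift-storage -o yaml | grep -E 'flexibleScaling|failureDomain'   # flexible scaling check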
Chapter 10. Removing Windows nodes | Chapter 10. Removing Windows nodes You can remove a Windows node by deleting its host Windows machine. 10.1. Deleting a specific machine You can delete a specific machine. Important Do not delete a control plane machine unless your cluster uses a control plane machine set. Prerequisites Install an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure View the machines that are in the cluster by running the following command: USD oc get machine -n openshift-machine-api The command output contains a list of machines in the <clusterid>-<role>-<cloud_region> format. Identify the machine that you want to delete. Delete the machine by running the following command: USD oc delete machine <machine> -n openshift-machine-api Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine. You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine. If the machine that you delete belongs to a machine set, a new machine is immediately created to satisfy the specified number of replicas. | [
"oc get machine -n openshift-machine-api",
"oc delete machine <machine> -n openshift-machine-api"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/windows_container_support_for_openshift/removing-windows-nodes |
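As a hedged example of the drain-skip annotation mentioned in the Important note above, the following shows annotating a machine before deleting it; the machine name is a placeholder taken from the oc get machine output.

oc annotate machine <clusterid>-windows-worker-a1b2c -n openshift-machine-api machine.openshift.io/exclude-node-draining=""   # skip draining the backing node
oc delete machine <clusterid>-windows-worker-a1b2c -n openshift-machine-api                                                   # delete the machine without waiting for a drain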
15.3. Moving Swap Space | 15.3. Moving Swap Space To move swap space from one location to another, follow the steps for removing swap space, and then follow the steps for adding swap space. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s1-swap-moving |
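A condensed, hedged sketch of the combined remove-then-add sequence for an LVM2 swap volume follows; the volume group name, logical volume names, and size are placeholders, and the corresponding /etc/fstab entry must be updated to match the new volume.

swapoff -v /dev/VolGroup00/LogVol01        # disable the old swap volume (the "removing swap" steps)
lvremove /dev/VolGroup00/LogVol01          # remove the old logical volume
lvcreate -L 2G -n LogVol02 VolGroup00      # create the new volume in the target location (the "adding swap" steps)
mkswap /dev/VolGroup00/LogVol02            # format it as swap
swapon -v /dev/VolGroup00/LogVol02         # enable the new swap volume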
Chapter 5. Tuning Transaction Logging | Chapter 5. Tuning Transaction Logging Every Directory Server contains a transaction log which writes operations for all the databases it manages. Whenever a directory database operation such as a modify is performed, the server creates a single database transaction for all of the database operations invoked as a result of that LDAP operation. This includes both updating the entry data in the entry index file and updating all of the attribute indexes. If all of the operations succeed, the server commits the transaction, writes the operations to the transaction log, and verifies that the entire transaction is written to disk. If any of these operations fail, the server rolls back the transaction, and all of the operations are discarded. This all-or-nothing approach in the server guarantees that an update operation is atomic . Either the entire operation succeeds permanently and irrevocably, or it fails. Periodically, the Directory Server (through internal housekeeping threads) flushes the contents of the transaction logs to the actual database index files and checks if the transaction logs require trimming. If the server experiences a failure, such as a power outage, and shuts down abnormally, the information about recent directory changes is still saved by the transaction log. When the server restarts, the directory automatically detects the error condition and uses the database transaction log to recover the database. Although database transaction logging and database recovery are automatic processes that require no intervention, it can be advisable to tune some of the database transaction logging attributes to optimize performance. Warning The transaction logging attributes are provided only for system modifications and diagnostics. These settings should be changed only with the guidance of Red Hat Technical Support. Setting these attributes and other configuration attributes inconsistently may cause the directory to be unstable. 5.1. Moving the Database Directory to a Separate Disk or Partition To achieve higher performance, store the directory server databases and transaction log on a fast drive, such as a nonvolatile memory express (NVMe) drive or an SSD. For example, if you already run a Directory Server instance and want to mount the /dev/nvme0n1p1 partition to the /var/lib/dirsrv/slapd- instance_name /db/ directory: Stop the instance: Mount the /dev/nvme0n1p1 partition to a temporary directory. For example: Copy the content of the /var/lib/dirsrv/slapd- instance_name /db/ directory to the temporary mount point: Unmount the temporary directory: If /var/lib/dirsrv/slapd- instance_name /db/ is also a separate mount point, unmount the directory: Update the /etc/fstab file to mount the /dev/nvme0n1p1 partition automatically to /var/lib/dirsrv/slapd- instance_name /db/ when the system boots. For details, see the corresponding section in the Red Hat System Administrator's Guide . Mount the file system. If you added the entry to /etc/fstab : If SELinux is running in enforcing mode, restore the SELinux context: Start the instance: | [
"systemctl stop dirsrv@ instance_name",
"mount /dev/nvme0n1p1 /mnt/",
"mv /var/lib/dirsrv/slapd- instance_name /db/* /mnt/",
"umount /mnt/",
"umount /var/lib/dirsrv/slapd- instance_name /db/",
"mount /var/lib/dirsrv/slapd- instance_name /db/",
"restorecon -Rv /var/lib/dirsrv/slapd- instance_name /db/",
"systemctl start dirsrv@ instance_name"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/tuning_database_performance-tuning_transaction_logging |
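The procedure above asks you to update /etc/fstab so that the partition is mounted automatically at boot, deferring the entry itself to the System Administrator's Guide. A minimal sketch for the example partition and mount point used in this chapter is shown below; the ext4 filesystem type and the defaults options are assumptions to adapt to your environment, and instance_name is a placeholder.

# Append an example mount entry to /etc/fstab (adapt filesystem type and options)
echo '/dev/nvme0n1p1 /var/lib/dirsrv/slapd-instance_name/db ext4 defaults 0 0' >> /etc/fstab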
Chapter 12. Installing on OpenStack | Chapter 12. Installing on OpenStack 12.1. Installing a cluster on OpenStack with customizations In OpenShift Container Platform version 4.7, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP). To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 12.1.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. Verify that OpenShift Container Platform 4.7 is compatible with your RHOSP version by using the "Supported platforms for OpenShift clusters" section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . Verify that your network configuration does not rely on a provider network. Provider networks are not supported. Have a storage service installed in RHOSP, like block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . Have metadata service enabled in RHOSP 12.1.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 12.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 112 GB vCPUs 28 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 12.1.2.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory, 4 vCPUs, and 100 GB storage space 12.1.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory, 2 vCPUs, and 100 GB storage space Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 12.1.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory, 4 vCPUs, and 100 GB storage space 12.1.3. 
Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 12.1.4. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 12.1.5. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. 
The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 12.1.6. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 12.1.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. 
Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 12.1.8. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Installation configuration parameters section for more information about the available parameters. 12.1.8.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. 
You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 12.1.9. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. 
Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 12.1.9.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 12.2. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 12.1.9.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 12.3. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . 
A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 12.1.9.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 12.4. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 12.1.9.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 12.5. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. 
If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 12.1.9.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 12.6. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. 
For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 12.1.9.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. 
If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 12.1.9.7. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the install-config.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . The RHOSP network supports both VM and bare metal server attachment. Your network configuration does not rely on a provider network. Provider networks are not supported. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an install-config.yaml file as part of the OpenShift Container Platform installation process. Procedure In the install-config.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of controlPlane.platform.openstack.type to a bare metal flavor. Change the value of compute.platform.openstack.type to a bare metal flavor. If you want to deploy your machines on a pre-existing network, change the value of platform.openstack.machinesSubnet to the RHOSP subnet UUID of the network. Control plane and compute machines must use the same subnet. An example bare metal install-config.yaml file controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 ... compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 ... platform: openstack: machinesSubnet: <subnet_UUID> 3 ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. 3 If you want to use a pre-existing network, change this value to the UUID of the RHOSP subnet. Use the updated install-config.yaml file to complete the installation process. The compute machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: ./openshift-install wait-for install-complete --log-level debug 12.1.9.8. 
Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 12.1.10. Setting compute machine affinity Optionally, you can set the affinity policy for compute machines during installation. The installer does not select an affinity policy for compute machines by default. You can also create machine sets that use particular RHOSP server groups after installation. Note Control plane machines are created with a soft-anti-affinity policy. Tip You can learn more about RHOSP instance scheduling and placement in the RHOSP documentation. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Using the RHOSP command-line interface, create a server group for your compute machines. For example: USD openstack \ --os-compute-api-version=2.15 \ server group create \ --policy anti-affinity \ my-openshift-worker-group For more information, see the server group create command documentation . Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir=<installation_directory> where: installation_directory Specifies the name of the directory that contains the install-config.yaml file for your cluster. Open manifests/99_openshift-cluster-api_worker-machineset-0.yaml , the MachineSet definition file. Add the property serverGroupID to the definition beneath the spec.template.spec.providerSpec.value property. 
For example: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_ID>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee 1 kind: OpenstackProviderSpec networks: - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_ID> securityGroups: - filter: {} name: <infrastructure_ID>-<node_role> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_ID> tags: - openshiftClusterID=<infrastructure_ID> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> 1 Add the UUID of your server group here. Optional: Back up the manifests/99_openshift-cluster-api_worker-machineset-0.yaml file. The installation program deletes the manifests/ directory when creating the cluster. When you install the cluster, the installer uses the MachineSet definition that you modified to create compute machines within your RHOSP server group. 12.1.11. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 12.1.12. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 12.1.12.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 12.1.12.2. 
Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 12.1.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. 
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 12.1.14. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 12.1.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 12.1.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 12.1.17. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . 
If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 12.2. Installing a cluster on OpenStack with Kuryr In OpenShift Container Platform version 4.7, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP) that uses Kuryr SDN. To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 12.2.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. Verify that OpenShift Container Platform 4.7 is compatible with your RHOSP version by using the "Supported platforms for OpenShift clusters" section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . Verify that your network configuration does not rely on a provider network. Provider networks are not supported. Have a storage service installed in RHOSP, like block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . 12.2.2. About Kuryr SDN Kuryr is a container network interface (CNI) plug-in solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr namespace: kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object. kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet object. The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs. Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network. If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial. Kuryr is not recommended in deployments where all of the following criteria are true: The RHOSP version is less than 16. The deployment uses UDP services, or a large number of TCP services on few hypervisors. or The ovn-octavia Octavia driver is disabled. The deployment uses a large number of TCP services on few hypervisors. 12.2.3. 
Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr When using Kuryr SDN, the pods, services, namespaces, and network policies are using resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires. Use the following quota to satisfy a default cluster's minimum requirements: Table 12.7. Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr Resource Value Floating IP addresses 3 - plus the expected number of Services of LoadBalancer type Ports 1500 - 1 needed per Pod Routers 1 Subnets 250 - 1 needed per Namespace/Project Networks 250 - 1 needed per Namespace/Project RAM 112 GB vCPUs 28 Volume storage 275 GB Instances 7 Security groups 250 - 1 needed per Service and per NetworkPolicy Security group rules 1000 Load balancers 100 - 1 needed per Service Load balancer listeners 500 - 1 needed per Service-exposed port Load balancer pools 500 - 1 needed per Service-exposed port A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Important If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects. Take the following notes into consideration when setting resources: The number of ports that are required is larger than the number of pods. Kuryr uses ports pools to have pre-created ports ready to be used by pods and speed up the pods' booting time. Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group. Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota. If you are using RHOSP version 15 or earlier, or the ovn-octavia driver , each load balancer has a security group with the user project. The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment's size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them. If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. To enable Kuryr SDN, your environment must meet the following requirements: Run RHOSP 13+. Have Overcloud with Octavia. Use Neutron Trunk ports extension. Use openvswitch firewall driver if ML2/OVS Neutron driver is used instead of ovs-hybrid . 12.2.3.1. Increasing quota When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies. 
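Tip Before you increase the quotas, you can review the project's current limits so that you can compare them with the values in the preceding table. The following command is a generic OpenStack CLI query rather than a required step of this procedure; replace <project> with the name of your project: USD openstack quota show <project>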
Procedure Increase the quotas for a project by running the following command: USD sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project> 12.2.3.2. Configuring Neutron Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work. In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies. 12.2.3.3. Configuring Octavia Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN. To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update. Note The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary. This example uses the local registry method. Procedure If you are using the local registry, create a template to upload the images to the registry. For example: (undercloud) USD openstack overcloud container image prepare \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ --namespace=registry.access.redhat.com/rhosp13 \ --push-destination=<local-ip-from-undercloud.conf>:8787 \ --prefix=openstack- \ --tag-from-label {version}-{product-version} \ --output-env-file=/home/stack/templates/overcloud_images.yaml \ --output-images-file /home/stack/local_registry_images.yaml Verify that the local_registry_images.yaml file contains the Octavia images. For example: ... - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787 Note The Octavia container versions vary depending upon the specific RHOSP release installed. Pull the container images from registry.redhat.io to the Undercloud node: (undercloud) USD sudo openstack overcloud container image upload \ --config-file /home/stack/local_registry_images.yaml \ --verbose This may take some time depending on the speed of your network and Undercloud disk. Since an Octavia load balancer is used to access the OpenShift Container Platform API, you must increase their listeners' default timeouts for the connections. The default timeout is 50 seconds. Increase the timeout to 20 minutes by passing the following file to the Overcloud deploy command: (undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000 Note This is not needed for RHOSP 13.0.13+. 
Install or update your Overcloud environment with Octavia: USD openstack overcloud deploy --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ -e octavia_timeouts.yaml Note This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director . Note When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN. In RHOSP versions earlier than 13.0.13, add the project ID to the octavia.conf configuration file after you create the project. To enforce network policies across services, like when traffic goes through the Octavia load balancer, you must ensure Octavia creates the Amphora VM security groups on the user project. This change ensures that required load balancer security groups belong to that project, and that they can be updated to enforce services isolation. Note This task is unnecessary in RHOSP version 13.0.13 or later. Octavia implements a new ACL API that restricts access to the load balancers VIP. Get the project ID: USD openstack project show <project> Example output +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+ Add the project ID to octavia.conf for the controllers. Source the stackrc file: USD source stackrc # Undercloud credentials List the Overcloud controllers: USD openstack server list Example output +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ SSH into the controller(s). USD ssh heat-admin@192.168.24.8 Edit the octavia.conf file to add the project into the list of projects where Amphora security groups are on the user's account. Restart the Octavia worker so the new configuration loads. controller-0USD sudo docker restart octavia_worker Note Depending on your RHOSP environment, Octavia might not support UDP listeners. If you use Kuryr SDN on RHOSP version 13.0.13 or earlier, UDP services are not supported. RHOSP version 16 or later supports UDP. 12.2.3.3.1. The Octavia OVN Driver Octavia supports multiple provider drivers through the Octavia API.
To see all available Octavia provider drivers, on a command line, enter: USD openstack loadbalancer provider list Example output +---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+ Beginning with RHOSP version 16, the Octavia OVN provider driver ( ovn ) is supported on OpenShift Container Platform on RHOSP deployments. ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2. The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it. If Kuryr uses ovn instead of Amphora, it offers the following benefits: Decreased resource requirements. Kuryr does not require a load balancer VM for each service. Reduced network latency. Increased service creation speed by using OpenFlow rules instead of a VM for each service. Distributed load balancing actions across all nodes instead of centralized on Amphora VMs. You can configure your cluster to use the Octavia OVN driver after your RHOSP cloud is upgraded from version 13 to version 16. 12.2.3.4. Known limitations of installing with Kuryr Using OpenShift Container Platform with Kuryr SDN has several known limitations. RHOSP general limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments: Service objects with the NodePort type are not supported. Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods. If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer . Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting. RHOSP version limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version. RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources. Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP. Octavia RHOSP versions before 13.0.13 do not support UDP listeners. Therefore, OpenShift Container Platform UDP services are not supported. Octavia RHOSP versions before 13.0.13 cannot listen to multiple protocols on the same port. Services that expose the same port to different protocols, like TCP and UDP, are not supported. Kuryr SDN does not support automatic unidling by a service. RHOSP environment limitations There are limitations when using Kuryr SDN that depend on your deployment environment. 
Because of Octavia's lack of support for the UDP protocol and multiple listeners, if the RHOSP version is earlier than 13.0.13, Kuryr forces pods to use TCP for DNS resolution. In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only. In this case, the native Go resolver does not recognize the use-vc option in resolv.conf , which controls whether TCP is forced for DNS resolution. As a result, UDP is still used for DNS resolution, which fails. To ensure that TCP forcing is allowed, compile applications either with the environment variable CGO_ENABLED set to 1 , i.e. CGO_ENABLED=1 , or ensure that the variable is absent. In Go versions 1.13 and later, TCP is used automatically if DNS resolution using UDP fails. Note musl-based containers, including Alpine-based containers, do not support the use-vc option. RHOSP upgrade limitations As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required. You can address API changes on an individual basis. If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways: Upgrade each VM by triggering a load balancer failover . Leave responsibility for upgrading the VMs to users. If the operator takes the first option, there might be short downtimes during failovers. If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features. Important If OpenShift Container Platform detects a new Octavia version that supports UDP load balancing, it recreates the DNS service automatically. The service recreation ensures that the service default supports UDP load balancing. The recreation causes the DNS service approximately one minute of downtime. 12.2.3.5. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory, 4 vCPUs, and 100 GB storage space 12.2.3.6. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory, 2 vCPUs, and 100 GB storage space Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 12.2.3.7. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory, 4 vCPUs, and 100 GB storage space 12.2.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 
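Note A simple way to confirm that the host that you run the installation program from can reach these endpoints is an ordinary HTTPS request, for example against Quay.io. This is a generic connectivity check rather than a required installation step: USD curl -I https://quay.io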
Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 12.2.5. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 12.2.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 12.2.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . 
The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 12.2.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . 
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 12.2.9. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 12.2.9.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 12.2.10. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 12.2.10.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 12.8. 
Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 12.2.10.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 12.9. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. 
If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 12.2.10.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 12.10. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 12.2.10.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 12.11. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . 
platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 12.2.10.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 12.12. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. 
To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 12.2.10.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 12.2.10.7. 
Sample customized install-config.yaml file for RHOSP with Kuryr To deploy with Kuryr SDN instead of the default OpenShift SDN, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType and proceed with the default OpenShift Container Platform SDN installation steps. This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 2 octaviaSupport: true 3 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts. 2 3 Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed to connect the pods to the RHOSP network and Octavia is required to create the OpenShift Container Platform services. 12.2.10.8. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted. The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes. Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add ports to the pool when it is created, such as when a new host is added, or a new namespace is created. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 12.2.10.9. Adjusting Kuryr ports pools during installation During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation. 
Prerequisites Create and modify the install-config.yaml file. Procedure From a command line, create the manifest files: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-network-03-config.yml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-network-* Example output cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want: USD oc edit networks.operator.openshift.io cluster Edit the settings to meet your requirements. The following file is provided as an example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5 1 Set the value of enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports after a namespace is created or a new node is added to the cluster. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . 2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . 3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 . 4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . 5 The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia's LoadBalancers. If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork , and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork . The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter. If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1. Save the cluster-network-03-config.yml file, and exit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. 
The installation program deletes the manifests/ directory while creating the cluster. 12.2.11. Setting compute machine affinity Optionally, you can set the affinity policy for compute machines during installation. The installer does not select an affinity policy for compute machines by default. You can also create machine sets that use particular RHOSP server groups after installation. Note Control plane machines are created with a soft-anti-affinity policy. Tip You can learn more about RHOSP instance scheduling and placement in the RHOSP documentation. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Using the RHOSP command-line interface, create a server group for your compute machines. For example: USD openstack \ --os-compute-api-version=2.15 \ server group create \ --policy anti-affinity \ my-openshift-worker-group For more information, see the server group create command documentation . Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir=<installation_directory> where: installation_directory Specifies the name of the directory that contains the install-config.yaml file for your cluster. Open manifests/99_openshift-cluster-api_worker-machineset-0.yaml , the MachineSet definition file. Add the property serverGroupID to the definition beneath the spec.template.spec.providerSpec.value property. For example: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_ID>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee 1 kind: OpenstackProviderSpec networks: - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_ID> securityGroups: - filter: {} name: <infrastructure_ID>-<node_role> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_ID> tags: - openshiftClusterID=<infrastructure_ID> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> 1 Add the UUID of your server group here. Optional: Back up the manifests/99_openshift-cluster-api_worker-machineset-0.yaml file. The installation program deletes the manifests/ directory when creating the cluster. When you install the cluster, the installer uses the MachineSet definition that you modified to create compute machines within your RHOSP server group. 12.2.12. 
Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging are required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa . Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 12.2.13. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 12.2.13.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>.
IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc command-line tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 12.2.13.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 12.2.14. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
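If you created floating IP addresses and DNS records as described earlier, you can optionally confirm that the API record resolves to the API FIP before you start the deployment; a failure here usually points to missing DNS records or /etc/hosts entries. The following check is illustrative only, and the host name is a placeholder that you must replace with your cluster's actual API address:
USD python3 -c 'import socket; print(socket.gethostbyname("api.<cluster_name>.<base_domain>"))'
The command prints the IP address that the name resolves to, which should match the value of platform.openstack.apiFloatingIP in your install-config.yaml file.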
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 12.2.15. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 12.2.16. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 12.2.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 12.2.18. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 12.3. Installing a cluster on OpenStack on your own infrastructure In OpenShift Container Platform version 4.7, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 12.3.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. Verify that OpenShift Container Platform 4.7 is compatible with your RHOSP version by using the "Supported platforms for OpenShift clusters" section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . Verify that your network configuration does not rely on a provider network. Provider networks are not supported. Have an RHOSP account where you want to install OpenShift Container Platform. On the machine from which you run the installation program, have: A single directory in which you can keep the files you create during the installation process Python 3 12.3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 12.3.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 12.13. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 112 GB vCPUs 28 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 12.3.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory, 4 vCPUs, and 100 GB storage space 12.3.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory, 2 vCPUs, and 100 GB storage space Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 12.3.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory, 4 vCPUs, and 100 GB storage space 12.3.4. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. 
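Note The procedure that follows uses Red Hat subscription repositories on RHEL 8. If you run the installer from a machine where those repositories are unavailable, a possible alternative, shown here only as an illustration and not as the documented procedure, is to install the equivalent modules from PyPI; the package names below are the PyPI counterparts of the RPMs in the procedure:
USD python3 -m pip install --user ansible openstacksdk python-openstackclient netaddr
The RHEL package-based procedure remains the documented path and is preferred where it is available.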
Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 12.3.5. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-containers.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 12.3.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. 
Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 12.3.7. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging are required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 12.3.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.7 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 12.3.9. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 12.3.10. 
Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 12.3.10.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: USD openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc command-line tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 12.3.10.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance.
Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 12.3.11. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 12.3.12. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. 12.3.13. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 12.3.13.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 12.14. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. 
The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 12.3.13.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 12.15. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. 
The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 12.3.13.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 12.16. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. 
Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 12.3.13.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 12.17. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 12.3.13.5. 
Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 12.18. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. 
{ "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 12.3.13.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 12.3.13.7. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. 
apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 12.3.13.8. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1 open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Insert a value that matches your intended Neutron subnet, for example 192.0.2.0/24 . To set the value manually, open the file and set the CIDR value under networking.machineNetwork to match your intended Neutron subnet. 12.3.13.9. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 12.3.14. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates.
See the documentation for Recovering from expired control plane certificates for more information. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. The following files are generated in the directory: the auth directory, which contains the kubeconfig and kubeadmin-password files, as well as bootstrap.ign , master.ign , worker.ign , and metadata.json . Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 12.3.15. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file.
The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script. The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 12.3.16. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). 
If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 12.3.17. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible playbook ... # The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform post-installation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. 
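Before you run the playbooks in the following steps, you can optionally confirm that Ansible can parse the inventory file and the values that you added to it. This check is not part of the documented procedure; it is only an illustrative sanity test that assumes the ansible-inventory command from your Ansible installation is available:
$ ansible-inventory -i inventory.yaml --list
If the command prints the parsed inventory as JSON without errors, the file is well formed and the playbooks can consume it.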
On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" Optionally, you can use the inventory.yaml file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines. 12.3.17.1. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . The RHOSP network supports both VM and bare metal server attachment. Your network configuration does not rely on a provider network. Provider networks are not supported. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an inventory.yaml file as part of the OpenShift Container Platform installation process. Procedure In the inventory.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor. Change the value of os_flavor_worker to a bare metal flavor. An example bare metal inventory.yaml file all: hosts: localhost: ansible_connection: local ansible_python_interpreter: "{{ansible_playbook_python}}" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external' ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: ./openshift-install wait-for install-complete --log-level debug 12.3.18. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". 
The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 12.3.19. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.14.6+f9b5405 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 12.3.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 12.3.21. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. 
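Optionally, you can confirm that the bootstrap server is gone. This check is not part of the playbook output; it is only an illustrative verification that lists any remaining servers whose names contain "bootstrap":
$ openstack server list | grep -i bootstrap
If the command returns no output, the bootstrap server was deleted.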
Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 12.3.22. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml steps Approve the certificate signing requests for the machines. 12.3.23. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. 
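A minimal, hypothetical sketch of such a method is a shell loop that periodically approves every pending CSR by reusing the bulk-approval command that appears later in this procedure. The sketch deliberately omits the requester and node identity checks that this note requires, so treat it as an illustration of the polling pattern only, not as a production implementation:
# Hypothetical polling loop; add requester and node identity verification before any real use.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done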
To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 12.3.24. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ) Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. 12.3.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 12.3.26. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 12.4. 
Installing a cluster on OpenStack with Kuryr on your own infrastructure In OpenShift Container Platform version 4.7, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 12.4.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. Verify that OpenShift Container Platform 4.7 is compatible with your RHOSP version by using the "Supported platforms for OpenShift clusters" section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . Verify that your network configuration does not rely on a provider network. Provider networks are not supported. Have an RHOSP account where you want to install OpenShift Container Platform. On the machine from which you run the installation program, have: A single directory in which you can keep the files you create during the installation process Python 3 12.4.2. About Kuryr SDN Kuryr is a container network interface (CNI) plug-in solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr namespace: kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object. kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet object. The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs. Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network. If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial. Kuryr is not recommended in deployments where all of the following criteria are true: The RHOSP version is less than 16. 
The deployment uses UDP services, or a large number of TCP services on few hypervisors. Kuryr is also not recommended when the ovn-octavia Octavia driver is disabled and the deployment uses a large number of TCP services on few hypervisors. 12.4.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr When using Kuryr SDN, the pods, services, namespaces, and network policies use resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires. Use the following quota to satisfy a default cluster's minimum requirements: Table 12.19. Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr Resource Value Floating IP addresses 3 - plus the expected number of Services of LoadBalancer type Ports 1500 - 1 needed per Pod Routers 1 Subnets 250 - 1 needed per Namespace/Project Networks 250 - 1 needed per Namespace/Project RAM 112 GB vCPUs 28 Volume storage 275 GB Instances 7 Security groups 250 - 1 needed per Service and per NetworkPolicy Security group rules 1000 Load balancers 100 - 1 needed per Service Load balancer listeners 500 - 1 needed per Service-exposed port Load balancer pools 500 - 1 needed per Service-exposed port A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Important If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects. Take the following notes into consideration when setting resources: The number of ports that are required is larger than the number of pods. Kuryr uses port pools to have pre-created ports ready to be used by pods and speed up the pods' booting time. Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group. Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota. If you are using RHOSP version 15 or earlier, or the ovn-octavia driver , each load balancer has a security group with the user project. The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment's size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them. If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. To enable Kuryr SDN, your environment must meet the following requirements: Run RHOSP 13+. Have Overcloud with Octavia. Use Neutron Trunk ports extension. Use openvswitch firewall driver if ML2/OVS Neutron driver is used instead of ovs-hybrid . 12.4.3.1.
Increasing quota When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies. Procedure Increase the quotas for a project by running the following command: USD sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project> 12.4.3.2. Configuring Neutron Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work. In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies. 12.4.3.3. Configuring Octavia Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN. To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update. Note The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary. This example uses the local registry method. Procedure If you are using the local registry, create a template to upload the images to the registry. For example: (undercloud) USD openstack overcloud container image prepare \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ --namespace=registry.access.redhat.com/rhosp13 \ --push-destination=<local-ip-from-undercloud.conf>:8787 \ --prefix=openstack- \ --tag-from-label {version}-{product-version} \ --output-env-file=/home/stack/templates/overcloud_images.yaml \ --output-images-file /home/stack/local_registry_images.yaml Verify that the local_registry_images.yaml file contains the Octavia images. For example: ... - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787 Note The Octavia container versions vary depending upon the specific RHOSP release installed. Pull the container images from registry.redhat.io to the Undercloud node: (undercloud) USD sudo openstack overcloud container image upload \ --config-file /home/stack/local_registry_images.yaml \ --verbose This may take some time depending on the speed of your network and Undercloud disk. Since an Octavia load balancer is used to access the OpenShift Container Platform API, you must increase their listeners' default timeouts for the connections. The default timeout is 50 seconds. 
Increase the timeout to 20 minutes by passing the following file to the Overcloud deploy command: (undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000 Note This is not needed for RHOSP 13.0.13+. Install or update your Overcloud environment with Octavia: USD openstack overcloud deploy --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ -e octavia_timeouts.yaml Note This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director . Note When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN. In RHOSP versions earlier than 13.0.13, add the project ID to the octavia.conf configuration file after you create the project. To enforce network policies across services, like when traffic goes through the Octavia load balancer, you must ensure Octavia creates the Amphora VM security groups on the user project. This change ensures that required load balancer security groups belong to that project, and that they can be updated to enforce service isolation. Note This task is unnecessary in RHOSP version 13.0.13 or later. Octavia implements a new ACL API that restricts access to the load balancer VIP. Get the project ID: USD openstack project show <project> Example output +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+ Add the project ID to octavia.conf for the controllers. Source the stackrc file: USD source stackrc # Undercloud credentials List the Overcloud controllers: USD openstack server list Example output +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ SSH into the controller(s), for example by using the heat-admin user and the controller's ctlplane address from the preceding output: USD ssh heat-admin@192.168.24.8 Edit the octavia.conf file to add the project into the list of projects where Amphora security groups are on the user's account. Restart the Octavia worker so the new configuration loads. controller-0USD sudo docker restart octavia_worker Note Depending on your RHOSP environment, Octavia might not support UDP listeners. If you use Kuryr SDN on RHOSP version 13.0.13 or earlier, UDP services are not supported. RHOSP version 16 or later supports UDP. 12.4.3.3.1.
The Octavia OVN Driver Octavia supports multiple provider drivers through the Octavia API. To see all available Octavia provider drivers, on a command line, enter: USD openstack loadbalancer provider list Example output +---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+ Beginning with RHOSP version 16, the Octavia OVN provider driver ( ovn ) is supported on OpenShift Container Platform on RHOSP deployments. ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2. The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it. If Kuryr uses ovn instead of Amphora, it offers the following benefits: Decreased resource requirements. Kuryr does not require a load balancer VM for each service. Reduced network latency. Increased service creation speed by using OpenFlow rules instead of a VM for each service. Distributed load balancing actions across all nodes instead of centralized on Amphora VMs. 12.4.3.4. Known limitations of installing with Kuryr Using OpenShift Container Platform with Kuryr SDN has several known limitations. RHOSP general limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments: Service objects with the NodePort type are not supported. Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods. If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer . Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting. RHOSP version limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version. RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources. Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP. Octavia RHOSP versions before 13.0.13 do not support UDP listeners. Therefore, OpenShift Container Platform UDP services are not supported. Octavia RHOSP versions before 13.0.13 cannot listen to multiple protocols on the same port. Services that expose the same port to different protocols, like TCP and UDP, are not supported. Kuryr SDN does not support automatic unidling by a service. RHOSP environment limitations There are limitations when using Kuryr SDN that depend on your deployment environment. 
Because of Octavia's lack of support for the UDP protocol and multiple listeners, if the RHOSP version is earlier than 13.0.13, Kuryr forces pods to use TCP for DNS resolution. In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only. In this case, the native Go resolver does not recognize the use-vc option in resolv.conf , which controls whether TCP is forced for DNS resolution. As a result, UDP is still used for DNS resolution, which fails. To ensure that TCP forcing is allowed, compile applications either with the environment variable CGO_ENABLED set to 1 , i.e. CGO_ENABLED=1 , or ensure that the variable is absent. In Go versions 1.13 and later, TCP is used automatically if DNS resolution using UDP fails. Note musl-based containers, including Alpine-based containers, do not support the use-vc option. RHOSP upgrade limitations As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required. You can address API changes on an individual basis. If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways: Upgrade each VM by triggering a load balancer failover . Leave responsibility for upgrading the VMs to users. If the operator takes the first option, there might be short downtimes during failovers. If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features. Important If OpenShift Container Platform detects a new Octavia version that supports UDP load balancing, it recreates the DNS service automatically. The service recreation ensures that the service default supports UDP load balancing. The recreation causes the DNS service approximately one minute of downtime. 12.4.3.5. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory, 4 vCPUs, and 100 GB storage space 12.4.3.6. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory, 2 vCPUs, and 100 GB storage space Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 12.4.3.7. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory, 4 vCPUs, and 100 GB storage space 12.4.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 
Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 12.4.5. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 12.4.6. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. 
Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-containers.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 12.4.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 12.4.8. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 12.4.9. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.7 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. 
Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 12.4.10. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 12.4.11. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 12.4.11.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> By using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: USD openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. 
IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc CLI. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 12.4.11.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 12.4.12. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.
Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 12.4.13. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to.
All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. 12.4.14. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 12.4.14.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 12.20. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 12.4.14.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. 
Only IPv4 addresses are supported. Table 12.21. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 12.4.14.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 12.22. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute .
The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. 
Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: sshKey: <key1> <key2> <key3> 12.4.14.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 12.23. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 12.4.14.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 12.24. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf .
controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 12.4.14.6. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. 
This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 12.4.14.7. Sample customized install-config.yaml file for RHOSP with Kuryr To deploy with Kuryr SDN instead of the default OpenShift SDN, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType and proceed with the default OpenShift Container Platform SDN installation steps. This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 2 octaviaSupport: true 3 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts. 2 3 Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed to connect the pods to the RHOSP network and Octavia is required to create the OpenShift Container Platform services. 12.4.14.8. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. 
Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted. The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes. Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add ports to the pool when it is created, such as when a new host is added, or a new namespace is created. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 12.4.14.9. Adjusting Kuryr ports pools during installation During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation. Prerequisites Create and modify the install-config.yaml file. Procedure From a command line, create the manifest files: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-network-03-config.yml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-network-* Example output cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want: USD oc edit networks.operator.openshift.io cluster Edit the settings to meet your requirements. The following file is provided as an example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5 1 Set the value of enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports after a namespace is created or a new node is added to the cluster. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . 
2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . 3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 . 4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . 5 The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia's LoadBalancers. If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork , and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork . The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter. If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1. Save the cluster-network-03-config.yml file, and exit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster. 12.4.14.10. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1 open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Insert a value that matches your intended Neutron subnet, for example, 192.0.2.0/24 . To set the value manually, open the file and set the value of networking.machineNetwork to a value that matches your intended Neutron subnet. 12.4.14.11. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml .
From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 12.4.14.12. Modifying the network type By default, the installation program selects the OpenShiftSDN network type. To use Kuryr instead, change the value in the installation configuration file that the program generated. Prerequisites You have the file install-config.yaml that was generated by the OpenShift Container Platform installation program. Procedure In a command prompt, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["networkType"] = "Kuryr"; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set networking.networkType to "Kuryr" . 12.4.15. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them.
You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. The following files are generated in the directory: auth/kubeadmin-password auth/kubeconfig bootstrap.ign master.ign metadata.json worker.ign Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 12.4.16. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script.
The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 12.4.17. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". 
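Tip You can quickly confirm that the variable is set before you continue. The following commands are a sketch that assumes the metadata.json file from the installation program is in your current working directory: USD echo "USDINFRA_ID" USD export INFRA_ID=USD(jq -r .infraID metadata.json) The first command prints the current value, and the second command sets the variable again from metadata.json if the value is empty or missing.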
Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 12.4.18. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible playbook ... # The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform post-installation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" 12.4.19. 
Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 12.4.20. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.14.6+f9b5405 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 12.4.21. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 12.4.22. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. 
If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 12.4.23. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml steps Approve the certificate signing requests for the machines. 12.4.24. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 12.4.25. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ) Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. 12.4.26. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 12.4.27. steps Customize your cluster . 
If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 12.5. Installing a cluster on OpenStack on your own SR-IOV infrastructure In OpenShift Container Platform 4.7, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure and uses single-root input/output virtualization (SR-IOV) networks to run compute machines. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, such as Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 12.5.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. Verify that OpenShift Container Platform 4.7 is compatible with your RHOSP version by using the "Supported platforms for OpenShift clusters" section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . Verify that your network configuration does not rely on a provider network. Provider networks are not supported. Have an RHOSP account where you want to install OpenShift Container Platform. On the machine where you run the installation program, have: A single directory in which you can keep the files you create during the installation process Python 3 12.5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 12.5.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 12.25. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 112 GB vCPUs 28 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 A cluster might function with fewer than recommended resources, but its performance is not guaranteed. 
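Tip Before you continue, you can compare these recommendations with the quota that is currently granted to your RHOSP project. The commands below are only a suggested check, not part of the documented procedure, and they assume that the openstack command-line client is installed and that your clouds.yaml file or OS_* environment variables already point at the target project:
USD openstack quota show # Nova, Neutron, and Cinder quotas for the current project
USD openstack limits show --absolute # Current usage compared with the absolute limits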
Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 12.5.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory, 4 vCPUs, and 100 GB storage space 12.5.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory, 2 vCPUs, and 100 GB storage space Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. Additionally, for clusters that use single-root input/output virtualization (SR-IOV), RHOSP compute nodes require a flavor that supports huge pages . Important SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs. For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure. Additional resources For more information about configuring performant RHOSP compute nodes, see Configuring Compute nodes for performance . 12.5.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory, 4 vCPUs, and 100 GB storage space 12.5.4. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. 
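Note A quick way to confirm these prerequisites before you continue is shown below. This is only a suggested check, not part of the documented procedure:
USD python3 --version # The playbooks and modules require Python 3
USD cat /etc/redhat-release # These instructions assume RHEL 8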
Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 12.5.5. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-containers.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 12.5.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. 
Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 12.5.7. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 12.5.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.7 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 12.5.9. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 12.5.10. 
Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 12.5.10.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: USD openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc command-line tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 12.5.10.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance.
Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 12.5.11. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 12.5.12. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. 12.5.13. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 12.5.13.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 12.26. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. 
The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 12.5.13.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 12.27. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. 
The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 12.5.13.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 12.28. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.
Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. 12.5.13.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 12.29. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 12.5.13.5.
Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 12.30. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. 
{ "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 12.5.13.6. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 12.5.13.7. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. 
To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIP and platform.openstack.ingressVIP that are outside of the DHCP allocation pool. 12.5.13.8. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1 open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Insert a value that matches your intended Neutron subnet, e.g. 192.0.2.0/24 . To set the value manually, open the file and set the value of networking.machineCIDR to something that matches your intended Neutron subnet. 12.5.13.9. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 12.5.14. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. 
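If you do need to recover in this way, the pending CSRs can be listed and approved with the same commands that are shown in "Approving the certificate signing requests for your machines". The following is a minimal sketch that assumes the oc CLI is already configured against the cluster:
USD oc get csr # List CSRs and look for any in the Pending state
USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve # Approve all pending CSRs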
Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. + Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. The following files are generated in the directory: Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 12.5.15. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. 
The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script. The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 12.5.16. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). 
If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 12.5.17. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible playbook ... # The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform post-installation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. 
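Tip If you have not already reserved floating IP addresses to use for these values, you can create them from the external network by using the RHOSP CLI. The following commands are a minimal example that assumes the external network is named external , as in the sample inventory.yaml above:
USD openstack floating ip create external
USD openstack floating ip list
Record the addresses that are returned and enter them as the os_api_fip , os_ingress_fip , and os_bootstrap_fip values.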
On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" Optionally, you can use the inventory.yaml file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines. 12.5.17.1. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . The RHOSP network supports both VM and bare metal server attachment. Your network configuration does not rely on a provider network. Provider networks are not supported. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an inventory.yaml file as part of the OpenShift Container Platform installation process. Procedure In the inventory.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor. Change the value of os_flavor_worker to a bare metal flavor. An example bare metal inventory.yaml file all: hosts: localhost: ansible_connection: local ansible_python_interpreter: "{{ansible_playbook_python}}" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external' ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: ./openshift-install wait-for install-complete --log-level debug 12.5.18. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". 
The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 12.5.19. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.14.6+f9b5405 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 12.5.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 12.5.21. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. 
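If you stored the bootstrap Ignition file as an image in the RHOSP image service (Glance), you can also delete that image now; removing it invalidates the bootstrap Ignition file URL. For example, assuming the image name that you chose in "Preparing the bootstrap Ignition files":
USD openstack image delete <image_name>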
Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 12.5.22. Creating SR-IOV networks for compute machines If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV) , you can provision SR-IOV networks that compute machines run on. Note The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. Depending on your RHOSP deployment, other network types might be required. Prerequisites Your cluster supports SR-IOV. Note If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation. You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks. Procedure On a command line, create a radio RHOSP network: USD openstack network create radio --provider-physical-network radio --provider-network-type flat --external Create an uplink RHOSP network: USD openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external Create a subnet for the radio network: USD openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio Create a subnet for the uplink network: USD openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink 12.5.23. Creating compute machines that run on SR-IOV networks After standing up the control plane, create compute machines that run on the SR-IOV networks that you created in "Creating SR-IOV networks for compute machines". Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The metadata.yaml file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. You created radio and uplink SR-IOV networks as described in "Creating SR-IOV networks for compute machines". Procedure On a command line, change the working directory to the location of the inventory.yaml and common.yaml files. Add the radio and uplink networks to the end of the inventory.yaml file by using the additionalNetworks parameter: .... # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' additionalNetworks: - id: radio count: 4 1 type: direct port_security_enabled: no - id: uplink count: 4 2 type: direct port_security_enabled: no 1 2 The count parameter defines the number of SR-IOV virtual functions (VFs) to attach to each worker node. In this case, each network has four VFs. Replace the content of the compute-nodes.yaml file with the following text: Example 12.1. 
compute-nodes.yaml - import_playbook: common.yaml - hosts: all gather_facts: no vars: worker_list: [] port_name_list: [] nic_list: [] tasks: # Create the SDN/primary port for each worker node - name: 'Create the Compute ports' os_port: name: "{{ item.1 }}-{{ item.0 }}" network: "{{ os_network }}" security_groups: - "{{ os_sg_worker }}" allowed_address_pairs: - ip_address: "{{ os_ingressVIP }}" with_indexed_items: "{{ [os_port_worker] * os_compute_nodes_number }}" register: ports # Tag each SDN/primary port with cluster name - name: 'Set Compute ports tag' command: cmd: "openstack port set --tag {{ cluster_id_tag }} {{ item.1 }}-{{ item.0 }}" with_indexed_items: "{{ [os_port_worker] * os_compute_nodes_number }}" - name: 'List the Compute Trunks' command: cmd: "openstack network trunk list" when: os_networking_type == "Kuryr" register: compute_trunks - name: 'Create the Compute trunks' command: cmd: "openstack network trunk create --parent-port {{ item.1.id }} {{ os_compute_trunk_name }}-{{ item.0 }}" with_indexed_items: "{{ ports.results }}" when: - os_networking_type == "Kuryr" - "os_compute_trunk_name|string not in compute_trunks.stdout" - name: 'Call additional-port processing' include_tasks: additional-ports.yaml # Create additional ports in OpenStack - name: 'Create additionalNetworks ports' os_port: name: "{{ item.0 }}-{{ item.1.name }}" vnic_type: "{{ item.1.type }}" network: "{{ item.1.uuid }}" port_security_enabled: "{{ item.1.port_security_enabled|default(omit) }}" no_security_groups: "{{ 'true' if item.1.security_groups is not defined else omit }}" security_groups: "{{ item.1.security_groups | default(omit) }}" with_nested: - "{{ worker_list }}" - "{{ port_name_list }}" # Tag the ports with the cluster info - name: 'Set additionalNetworks ports tag' command: cmd: "openstack port set --tag {{ cluster_id_tag }} {{ item.0 }}-{{ item.1.name }}" with_nested: - "{{ worker_list }}" - "{{ port_name_list }}" # Build the nic list to use for server create - name: Build nic list set_fact: nic_list: "{{ nic_list | default([]) + [ item.name ] }}" with_items: "{{ port_name_list }}" # Create the servers - name: 'Create the Compute servers' vars: worker_nics: "{{ [ item.1 ] | product(nic_list) | map('join','-') | map('regex_replace', '(.*)', 'port-name=\\1') | list }}" os_server: name: "{{ item.1 }}" image: "{{ os_image_rhcos }}" flavor: "{{ os_flavor_worker }}" auto_ip: no userdata: "{{ lookup('file', 'worker.ign') | string }}" security_groups: [] nics: "{{ [ 'port-name=' + os_port_worker + '-' + item.0|string ] + worker_nics }}" config_drive: yes with_indexed_items: "{{ worker_list }}" Insert the following content into a local file that is called additional-ports.yaml : Example 12.2. 
additional-ports.yaml # Build a list of worker nodes with indexes - name: 'Build worker list' set_fact: worker_list: "{{ worker_list | default([]) + [ item.1 + '-' + item.0 | string ] }}" with_indexed_items: "{{ [ os_compute_server_name ] * os_compute_nodes_number }}" # Ensure that each network specified in additionalNetworks exists - name: 'Verify additionalNetworks' os_networks_info: name: "{{ item.id }}" with_items: "{{ additionalNetworks }}" register: network_info # Expand additionalNetworks by the count parameter in each network definition - name: 'Build port and port index list for additionalNetworks' set_fact: port_list: "{{ port_list | default([]) + [ { 'net_name' : item.1.id, 'uuid' : network_info.results[item.0].openstack_networks[0].id, 'type' : item.1.type|default('normal'), 'security_groups' : item.1.security_groups|default(omit), 'port_security_enabled' : item.1.port_security_enabled|default(omit) } ] * item.1.count|default(1) }}" index_list: "{{ index_list | default([]) + range(item.1.count|default(1)) | list }}" with_indexed_items: "{{ additionalNetworks }}" # Calculate and save the name of the port # The format of the name is cluster_name-worker-workerID-networkUUID(partial)-count # i.e. fdp-nz995-worker-1-99bcd111-1 - name: 'Calculate port name' set_fact: port_name_list: "{{ port_name_list | default([]) + [ item.1 | combine( {'name' : item.1.uuid | regex_search('([^-]+)') + '-' + index_list[item.0]|string } ) ] }}" with_indexed_items: "{{ port_list }}" when: port_list is defined On a command line, run the compute-nodes.yaml playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml 12.5.24. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 12.5.25. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ) Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. The cluster is operational. Before you can configure it for SR-IOV networks though, you must perform additional tasks. 12.5.26. Preparing a cluster that runs on RHOSP for SR-IOV Before you use single root I/O virtualization (SR-IOV) on a cluster that runs on Red Hat OpenStack Platform (RHOSP), make the RHOSP metadata service mountable as a drive and enable the No-IOMMU Operator for the virtual function I/O (VFIO) driver. 12.5.26.1. 
Enabling the RHOSP metadata service as a mountable drive You can apply a machine config to your machine pool that makes the Red Hat OpenStack Platform (RHOSP) metadata service available as a mountable drive. The following machine config enables the display of RHOSP network UUIDs from within the SR-IOV Network Operator. This configuration simplifies the association of SR-IOV resources to cluster SR-IOV resources. Procedure Create a machine config file from the following template: A mountable metadata service machine config file kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 20-mount-config 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - name: create-mountpoint-var-config.service enabled: true contents: | [Unit] Description=Create mountpoint /var/config Before=kubelet.service [Service] ExecStart=/bin/mkdir -p /var/config [Install] WantedBy=var-config.mount - name: var-config.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] Where=/var/config What=/dev/disk/by-label/config-2 [Install] WantedBy=local-fs.target 1 You can substitute a name of your choice. From a command line, apply the machine config: USD oc apply -f <machine_config_file_name>.yaml 12.5.26.2. Enabling the No-IOMMU feature for the RHOSP VFIO driver You can apply a machine config to your machine pool that enables the No-IOMMU feature for the Red Hat OpenStack Platform (RHOSP) virtual function I/O (VFIO) driver. The RHOSP vfio-pci driver requires this feature. Procedure Create a machine config file from the following template: A No-IOMMU VFIO machine config file kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 99-vfio-noiommu 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/vfio-noiommu.conf mode: 0644 contents: source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK 1 You can substitute a name of your choice. From a command line, apply the machine config: USD oc apply -f <machine_config_file_name>.yaml Note After you apply the machine config to the machine pool, you can watch the machine config pool status to see when the machines are available. The cluster is installed and prepared for SR-IOV configuration. You must now perform the SR-IOV configuration tasks in " steps". 12.5.27. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 12.5.28. Additional resources See Performance Addon Operator for low latency nodes for information about configuring your deployment for real-time running and low latency. 12.5.29. steps To complete SR-IOV configuration for your cluster: Install the Performance Addon Operator . Configure the Performance Addon Operator with huge pages support . 
Install the SR-IOV Operator . Configure your SR-IOV network device . Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 12.6. Installing a cluster on OpenStack in a restricted network In OpenShift Container Platform 4.7, you can install a cluster on Red Hat OpenStack Platform (RHOSP) in a restricted network by creating an internal mirror of the installation release content. Prerequisites Create a registry on your mirror host and obtain the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. Review details about the OpenShift Container Platform installation and update processes . Verify that OpenShift Container Platform 4.7 is compatible with your RHOSP version by using the "Supported platforms for OpenShift clusters" section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . Verify that your network configuration does not rely on a provider network. Provider networks are not supported. Have the metadata service enabled in RHOSP. 12.6.1. About installations in restricted networks In OpenShift Container Platform 4.7, you can perform an installation that does not require an active connection to the Internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less Internet access for an installation on bare metal hardware or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the Internet and your closed network, or by using other methods that meet your restrictions. 12.6.1.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 12.6.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 12.31. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 112 GB vCPUs 28 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 A cluster might function with fewer than recommended resources, but its performance is not guaranteed. 
Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 12.6.2.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory, 4 vCPUs, and 100 GB storage space 12.6.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory, 2 vCPUs, and 100 GB storage space Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 12.6.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory, 4 vCPUs, and 100 GB storage space 12.6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to obtain the images that are necessary to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 12.6.4. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. 
Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 12.6.5. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 12.6.6. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network Red Hat OpenStack Platform (RHOSP) environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.7 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) image. 
Decompress the image. Note You must decompress the image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: Upload the image that you decompressed to a location that is accessible from the bastion server, like Glance. For example: Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 12.6.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Have the imageContentSources values that were generated during mirror registry creation. Obtain the contents of the certificate for your mirror registry. Retrieve a Red Hat Enterprise Linux CoreOS (RHCOS) image and upload it to an accessible location. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . In the install-config.yaml file, set the value of platform.openstack.clusterOSImage to the image location or name. 
For example: platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry, which can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which look like this excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.example.com/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.example.com/ocp/release To complete these values, use the imageContentSources that you recorded during mirror registry creation. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 12.6.7.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 12.6.7.2. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 12.6.7.2.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 12.32. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. 
platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 12.6.7.2.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 12.33. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 12.6.7.2.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 12.34. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. 
This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 12.6.7.2.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 12.35. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 12.6.7.2.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 12.36. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. 
A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . platform.openstack.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . 
platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 12.6.7.3. Sample customized install-config.yaml file for restricted OpenStack installations This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineCIDR: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 12.6.8. Setting compute machine affinity Optionally, you can set the affinity policy for compute machines during installation. The installer does not select an affinity policy for compute machines by default. You can also create machine sets that use particular RHOSP server groups after installation. Note Control plane machines are created with a soft-anti-affinity policy. Tip You can learn more about RHOSP instance scheduling and placement in the RHOSP documentation. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Using the RHOSP command-line interface, create a server group for your compute machines. For example: USD openstack \ --os-compute-api-version=2.15 \ server group create \ --policy anti-affinity \ my-openshift-worker-group For more information, see the server group create command documentation . Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir=<installation_directory> where: installation_directory Specifies the name of the directory that contains the install-config.yaml file for your cluster. Open manifests/99_openshift-cluster-api_worker-machineset-0.yaml , the MachineSet definition file. Add the property serverGroupID to the definition beneath the spec.template.spec.providerSpec.value property. 
For example: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_ID>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee 1 kind: OpenstackProviderSpec networks: - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_ID> securityGroups: - filter: {} name: <infrastructure_ID>-<node_role> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_ID> tags: - openshiftClusterID=<infrastructure_ID> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> 1 Add the UUID of your server group here. Optional: Back up the manifests/99_openshift-cluster-api_worker-machineset-0.yaml file. The installation program deletes the manifests/ directory when creating the cluster. When you install the cluster, the installer uses the MachineSet definition that you modified to create compute machines within your RHOSP server group. 12.6.9. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging are required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
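For a FIPS-enabled installation host, a minimal sketch of creating an ECDSA key instead follows; the file name ~/.ssh/id_ecdsa is only an illustrative placeholder: USD ssh-keygen -t ecdsa -b 521 -N '' -f ~/.ssh/id_ecdsa The remaining steps of the procedure are unchanged: add whichever key you create to the ssh-agent and provide its public key to the installation program.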
Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa . Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 12.6.10. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 12.6.10.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc CLIs. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file; a short example excerpt follows this procedure. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration.
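As an illustration of the floating IP parameters described above, a minimal install-config.yaml excerpt might look like the following; the network name and IP addresses are placeholders rather than values from your environment: platform: openstack: externalNetwork: external apiFloatingIP: 203.0.113.23 ingressFloatingIP: 203.0.113.19 12.6.10.2.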
Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 12.6.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. 
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 12.6.12. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 12.6.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 12.6.14. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Global Configuration → OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 12.6.15. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access.
If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service . 12.6.16. Next steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . 12.7. Uninstalling a cluster on OpenStack You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP). 12.7.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites Have a copy of the installation program that you used to deploy the cluster. Have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 12.8. Uninstalling a cluster on RHOSP from your own infrastructure You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP) on user-provisioned infrastructure. 12.8.1. Downloading playbook dependencies The Ansible playbooks that simplify the removal process on user-provisioned infrastructure require several Python modules. On the machine where you will run the process, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine.
Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 12.8.2. Removing a cluster from RHOSP that uses your own infrastructure You can remove an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) that uses your own infrastructure. To complete the removal process quickly, run several Ansible playbooks. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies." You have the playbooks that you used to install the cluster. You modified the playbooks that are prefixed with down- to reflect any changes that you made to their corresponding installation playbooks. For example, changes to the bootstrap.yaml file are reflected in the down-bootstrap.yaml file. All of the playbooks are in a common directory. Procedure On a command line, run the playbooks that you downloaded: USD ansible-playbook -i inventory.yaml \ down-bootstrap.yaml \ down-control-plane.yaml \ down-compute-nodes.yaml \ down-load-balancers.yaml \ down-network.yaml \ down-security-groups.yaml Remove any DNS record changes you made for the OpenShift Container Platform installation. OpenShift Container Platform is removed from your infrastructure. | [
"openstack role add --user <user> --project <project> swiftoperator",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"openstack --os-compute-api-version=2.15 server group create --policy anti-affinity my-openshift-worker-group",
"./openshift-install create manifests --dir=<installation_directory>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_ID>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee 1 kind: OpenstackProviderSpec networks: - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_ID> securityGroups: - filter: {} name: <infrastructure_ID>-<node_role> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_ID> tags: - openshiftClusterID=<infrastructure_ID> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>",
"(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml",
"- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787",
"(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose",
"(undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml",
"openstack project show <project>",
"+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+",
"source stackrc # Undercloud credentials",
"openstack server list",
"+--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | ID | Name | Status | Networks | Image | Flavor | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | │ | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+",
"ssh [email protected]",
"List of project IDs that are allowed to have Load balancer security groups belonging to them. amp_secgroup_allowed_projects = PROJECT_ID",
"controller-0USD sudo docker restart octavia_worker",
"openstack loadbalancer provider list",
"+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+",
"openstack role add --user <user> --project <project> swiftoperator",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 2 octaviaSupport: true 3 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-network-03-config.yml 1",
"ls <installation_directory>/manifests/cluster-network-*",
"cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5",
"openstack --os-compute-api-version=2.15 server group create --policy anti-affinity my-openshift-worker-group",
"./openshift-install create manifests --dir=<installation_directory>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_ID>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee 1 kind: OpenstackProviderSpec networks: - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_ID> securityGroups: - filter: {} name: <infrastructure_ID>-<node_role> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_ID> tags: - openshiftClusterID=<infrastructure_ID> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-containers.yaml'",
"tar xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml network.yaml",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'",
"./openshift-install wait-for install-complete --log-level debug",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.14.6+f9b5405 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0",
"openshift-install --log-level debug wait-for install-complete",
"sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>",
"(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml",
"- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787",
"(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose",
"(undercloud) USD cat octavia_timeouts.yaml parameter_defaults: OctaviaTimeoutClientData: 1200000 OctaviaTimeoutMemberData: 1200000",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml",
"openstack project show <project>",
"+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | enabled | True | | id | PROJECT_ID | | is_domain | False | | name | *<project>* | | parent_id | default | | tags | [] | +-------------+----------------------------------+",
"source stackrc # Undercloud credentials",
"openstack server list",
"+--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | ID | Name | Status | Networks | Image | Flavor | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+ │ | 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller | │ | dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0 | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute | │ +--------------------------------------+--------------+--------+-----------------------+----------------+------------+",
"ssh [email protected]",
"List of project IDs that are allowed to have Load balancer security groups belonging to them. amp_secgroup_allowed_projects = PROJECT_ID",
"controller-0USD sudo docker restart octavia_worker",
"openstack loadbalancer provider list",
"+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-containers.yaml'",
"tar xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 2 octaviaSupport: true 3 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-network-03-config.yml 1",
"ls <installation_directory>/manifests/cluster-network-*",
"cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"networkType\"] = \"Kuryr\"; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml network.yaml",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.14.6+f9b5405 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0",
"openshift-install --log-level debug wait-for install-complete",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.7/upi/openstack/down-containers.yaml'",
"tar xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml network.yaml",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'",
"./openshift-install wait-for install-complete --log-level debug",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.14.6+f9b5405 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"openstack network create radio --provider-physical-network radio --provider-network-type flat --external",
"openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external",
"openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio",
"openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink",
". If this value is non-empty, the corresponding floating IP will be attached to the bootstrap machine. This is needed for collecting logs in case of install failure. os_bootstrap_fip: '203.0.113.20' additionalNetworks: - id: radio count: 4 1 type: direct port_security_enabled: no - id: uplink count: 4 2 type: direct port_security_enabled: no",
"- import_playbook: common.yaml - hosts: all gather_facts: no vars: worker_list: [] port_name_list: [] nic_list: [] tasks: # Create the SDN/primary port for each worker node - name: 'Create the Compute ports' os_port: name: \"{{ item.1 }}-{{ item.0 }}\" network: \"{{ os_network }}\" security_groups: - \"{{ os_sg_worker }}\" allowed_address_pairs: - ip_address: \"{{ os_ingressVIP }}\" with_indexed_items: \"{{ [os_port_worker] * os_compute_nodes_number }}\" register: ports # Tag each SDN/primary port with cluster name - name: 'Set Compute ports tag' command: cmd: \"openstack port set --tag {{ cluster_id_tag }} {{ item.1 }}-{{ item.0 }}\" with_indexed_items: \"{{ [os_port_worker] * os_compute_nodes_number }}\" - name: 'List the Compute Trunks' command: cmd: \"openstack network trunk list\" when: os_networking_type == \"Kuryr\" register: compute_trunks - name: 'Create the Compute trunks' command: cmd: \"openstack network trunk create --parent-port {{ item.1.id }} {{ os_compute_trunk_name }}-{{ item.0 }}\" with_indexed_items: \"{{ ports.results }}\" when: - os_networking_type == \"Kuryr\" - \"os_compute_trunk_name|string not in compute_trunks.stdout\" - name: 'Call additional-port processing' include_tasks: additional-ports.yaml # Create additional ports in OpenStack - name: 'Create additionalNetworks ports' os_port: name: \"{{ item.0 }}-{{ item.1.name }}\" vnic_type: \"{{ item.1.type }}\" network: \"{{ item.1.uuid }}\" port_security_enabled: \"{{ item.1.port_security_enabled|default(omit) }}\" no_security_groups: \"{{ 'true' if item.1.security_groups is not defined else omit }}\" security_groups: \"{{ item.1.security_groups | default(omit) }}\" with_nested: - \"{{ worker_list }}\" - \"{{ port_name_list }}\" # Tag the ports with the cluster info - name: 'Set additionalNetworks ports tag' command: cmd: \"openstack port set --tag {{ cluster_id_tag }} {{ item.0 }}-{{ item.1.name }}\" with_nested: - \"{{ worker_list }}\" - \"{{ port_name_list }}\" # Build the nic list to use for server create - name: Build nic list set_fact: nic_list: \"{{ nic_list | default([]) + [ item.name ] }}\" with_items: \"{{ port_name_list }}\" # Create the servers - name: 'Create the Compute servers' vars: worker_nics: \"{{ [ item.1 ] | product(nic_list) | map('join','-') | map('regex_replace', '(.*)', 'port-name=\\\\1') | list }}\" os_server: name: \"{{ item.1 }}\" image: \"{{ os_image_rhcos }}\" flavor: \"{{ os_flavor_worker }}\" auto_ip: no userdata: \"{{ lookup('file', 'worker.ign') | string }}\" security_groups: [] nics: \"{{ [ 'port-name=' + os_port_worker + '-' + item.0|string ] + worker_nics }}\" config_drive: yes with_indexed_items: \"{{ worker_list }}\"",
"Build a list of worker nodes with indexes - name: 'Build worker list' set_fact: worker_list: \"{{ worker_list | default([]) + [ item.1 + '-' + item.0 | string ] }}\" with_indexed_items: \"{{ [ os_compute_server_name ] * os_compute_nodes_number }}\" Ensure that each network specified in additionalNetworks exists - name: 'Verify additionalNetworks' os_networks_info: name: \"{{ item.id }}\" with_items: \"{{ additionalNetworks }}\" register: network_info Expand additionalNetworks by the count parameter in each network definition - name: 'Build port and port index list for additionalNetworks' set_fact: port_list: \"{{ port_list | default([]) + [ { 'net_name' : item.1.id, 'uuid' : network_info.results[item.0].openstack_networks[0].id, 'type' : item.1.type|default('normal'), 'security_groups' : item.1.security_groups|default(omit), 'port_security_enabled' : item.1.port_security_enabled|default(omit) } ] * item.1.count|default(1) }}\" index_list: \"{{ index_list | default([]) + range(item.1.count|default(1)) | list }}\" with_indexed_items: \"{{ additionalNetworks }}\" Calculate and save the name of the port The format of the name is cluster_name-worker-workerID-networkUUID(partial)-count i.e. fdp-nz995-worker-1-99bcd111-1 - name: 'Calculate port name' set_fact: port_name_list: \"{{ port_name_list | default([]) + [ item.1 | combine( {'name' : item.1.uuid | regex_search('([^-]+)') + '-' + index_list[item.0]|string } ) ] }}\" with_indexed_items: \"{{ port_list }}\" when: port_list is defined",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0",
"openshift-install --log-level debug wait-for install-complete",
"kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 20-mount-config 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - name: create-mountpoint-var-config.service enabled: true contents: | [Unit] Description=Create mountpoint /var/config Before=kubelet.service [Service] ExecStart=/bin/mkdir -p /var/config [Install] WantedBy=var-config.mount - name: var-config.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] Where=/var/config What=/dev/disk/by-label/config-2 [Install] WantedBy=local-fs.target",
"oc apply -f <machine_config_file_name>.yaml",
"kind: MachineConfig apiVersion: machineconfiguration.openshift.io/v1 metadata: name: 99-vfio-noiommu 1 labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/vfio-noiommu.conf mode: 0644 contents: source: data:;base64,b3B0aW9ucyB2ZmlvIGVuYWJsZV91bnNhZmVfbm9pb21tdV9tb2RlPTEK",
"oc apply -f <machine_config_file_name>.yaml",
"openstack role add --user <user> --project <project> swiftoperator",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: shiftstack_user password: XXX user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: 'devuser' password: XXX project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"file <name_of_downloaded_file>",
"openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-USD{RHCOS_VERSION}",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.example.com/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.example.com/ocp/release",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineCIDR: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OpenShiftSDN platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"openstack --os-compute-api-version=2.15 server group create --policy anti-affinity my-openshift-worker-group",
"./openshift-install create manifests --dir=<installation_directory>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_ID>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_ID> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_ID>-<node_role> spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee 1 kind: OpenstackProviderSpec networks: - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_ID> securityGroups: - filter: {} name: <infrastructure_ID>-<node_role> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_ID> tags: - openshiftClusterID=<infrastructure_ID> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk",
"sudo alternatives --set python /usr/bin/python3",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml down-control-plane.yaml down-compute-nodes.yaml down-load-balancers.yaml down-network.yaml down-security-groups.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/installing/installing-on-openstack |
Appendix I. Ceph scrubbing options | Appendix I. Ceph scrubbing options Ceph ensures data integrity by scrubbing placement groups. The following are the Ceph scrubbing options that you can adjust to increase or decrease scrubbing operations. You can set these configuration options with the ceph config set global CONFIGURATION_OPTION VALUE command. mds_max_scrub_ops_in_progress Description The maximum number of scrub operations performed in parallel. You can set this value with ceph config set mds_max_scrub_ops_in_progress VALUE command. Type integer Default 5 osd_max_scrubs Description The maximum number of simultaneous scrub operations for a Ceph OSD Daemon. Type integer Default 1 osd_scrub_begin_hour Description The specific hour at which the scrubbing begins. Along with osd_scrub_end_hour , you can define a time window in which the scrubs can happen. Use osd_scrub_begin_hour = 0 and osd_scrub_end_hour = 0 to allow scrubbing the entire day. Type integer Default 0 Allowed range [0, 23] osd_scrub_end_hour Description The specific hour at which the scrubbing ends. Along with osd_scrub_begin_hour , you can define a time window, in which the scrubs can happen. Use osd_scrub_begin_hour = 0 and osd_scrub_end_hour = 0 to allow scrubbing for the entire day. Type integer Default 0 Allowed range [0, 23] osd_scrub_begin_week_day Description The specific day on which the scrubbing begins. 0 = Sunday, 1 = Monday, etc. Along with "osd_scrub_end_week_day", you can define a time window in which scrubs can happen. Use osd_scrub_begin_week_day = 0 and osd_scrub_end_week_day = 0 to allow scrubbing for the entire week. Type integer Default 0 Allowed range [0, 6] osd_scrub_end_week_day Description This defines the day on which the scrubbing ends. 0 = Sunday, 1 = Monday, etc. Along with osd_scrub_begin_week_day , they define a time window, in which the scrubs can happen. Use osd_scrub_begin_week_day = 0 and osd_scrub_end_week_day = 0 to allow scrubbing for the entire week. Type integer Default 0 Allowed range [0, 6] osd_scrub_during_recovery Description Allow scrub during recovery. Setting this to false disables scheduling new scrub, and deep-scrub, while there is an active recovery. The already running scrubs continue which is useful to reduce load on busy storage clusters. Type boolean Default false osd_scrub_load_threshold Description The normalized maximum load. Scrubbing does not happen when the system load, as defined by getloadavg() / number of online CPUs, is higher than this defined number. Type float Default 0.5 osd_scrub_min_interval Description The minimal interval in seconds for scrubbing the Ceph OSD daemon when the Ceph storage Cluster load is low. Type float Default 1 day osd_scrub_max_interval Description The maximum interval in seconds for scrubbing the Ceph OSD daemon irrespective of cluster load. Type float Default 7 days osd_scrub_chunk_min Description The minimal number of object store chunks to scrub during a single operation. Ceph blocks writes to a single chunk during scrub. type integer Default 5 osd_scrub_chunk_max Description The maximum number of object store chunks to scrub during a single operation. type integer Default 25 osd_scrub_sleep Description Time to sleep before scrubbing the group of chunks. Increasing this value slows down the overall rate of scrubbing, so that client operations are less impacted. type float Default 0.0 osd_scrub_extended_sleep Description Duration to inject a delay during scrubbing out of scrubbing hours or seconds. 
type float Default 0.0 osd_scrub_backoff_ratio Description Backoff ratio for scheduling scrubs. This is the percentage of ticks that do NOT schedule scrubs, 66% means that 1 out of 3 ticks schedules scrubs. type float Default 0.66 osd_deep_scrub_interval Description The interval for deep scrubbing, fully reading all data. The osd_scrub_load_threshold does not affect this setting. type float Default 7 days osd_debug_deep_scrub_sleep Description Inject an expensive sleep during deep scrub IO to make it easier to induce preemption. type float Default 0 osd_scrub_interval_randomize_ratio Description Add a random delay to osd_scrub_min_interval when scheduling the scrub job for a placement group. The delay is a random value less than osd_scrub_min_interval * osd_scrub_interval_randomized_ratio . The default setting spreads scrubs throughout the allowed time window of [1, 1.5] * osd_scrub_min_interval . type float Default 0.5 osd_deep_scrub_stride Description Read size when doing a deep scrub. type size Default 512 KB osd_scrub_auto_repair_num_errors Description Auto repair does not occur if more than this many errors are found. type integer Default 5 osd_scrub_auto_repair Description Setting this to true enables automatic Placement Group (PG) repair when errors are found by scrubs or deep-scrubs. However, if more than osd_scrub_auto_repair_num_errors errors are found, a repair is NOT performed. type boolean Default false osd_scrub_max_preemptions Description Set the maximum number of times you need to preempt a deep scrub due to a client operation before blocking client IO to complete the scrub. type integer Default 5 osd_deep_scrub_keys Description Number of keys to read from an object at a time during deep scrub. type integer Default 1024 | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/configuration_guide/ceph-scrubbing-options_conf |
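A minimal sketch of applying the scrub window and limit options described in this appendix, using the ceph config set global CONFIGURATION_OPTION VALUE form given above; the hour, week-day, and count values below are illustrative assumptions, not recommended defaults:
# Restrict scrubbing to a 22:00-07:00 window (illustrative values)
ceph config set global osd_scrub_begin_hour 22
ceph config set global osd_scrub_end_hour 7
# Allow scrubbing only from Saturday through Sunday (illustrative values)
ceph config set global osd_scrub_begin_week_day 6
ceph config set global osd_scrub_end_week_day 0
# Allow two simultaneous scrubs per OSD and enable automatic PG repair (illustrative)
ceph config set global osd_max_scrubs 2
ceph config set global osd_scrub_auto_repair true
# Confirm the value now in effect for the OSDs
ceph config get osd osd_scrub_begin_hour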
Chapter 3. Administer MicroProfile in JBoss EAP | Chapter 3. Administer MicroProfile in JBoss EAP 3.1. MicroProfile Telemetry administration 3.1.1. Add MicroProfile Telemetry subsystem using the management CLI The MicroProfile Telemetry component is integrated into the default MicroProfile configuration through the microprofile-telemetry subsystem. You can also add the MicroProfile Telemetry subsystem using the management CLI if the subsystem is not included. Prerequisites The OpenTelemetry subsystem must be added to the configuration before adding the MicroProfile Telemetry subsystem. The MicroProfile Telemetry subsystem depends on the OpenTelemetry subsystem. Procedure Open your terminal. Run the following command: 3.1.2. Enable MicroProfile Telemetry subsystem MicroProfile Telemetry is disabled by default and must be enabled on a per-application basis. Prerequisites The MicroProfile Telemetry subsystem has been added to the configuration. The OpenTelemetry subsystem has been added to the configuration. Procedure Open your microprofile-config.properties file. Set the otel.sdk.disabled property to false : 3.1.3. Override server configuration using MicroProfile Config You can override server configuration for individual applications in the MicroProfile Telemetry subsystem using MicroProfile Config. For example, the service name used in exported traces by default is the same as the deployment archive. If the deployment archive is set to my-application-1.0.war , the service name will be the same. To override this configuration, you can change the value of the otel.service.name property in your configuration file: 3.2. MicroProfile Config configuration 3.2.1. Adding properties in a ConfigSource management resource You can store properties directly in a config-source subsystem as a management resource. Procedure Create a ConfigSource and add a property: 3.2.2. Configuring directories as ConfigSources When a property is stored in a directory as a file, the file-name is the name of a property and the file content is the value of the property. Procedure Create a directory where you want to store the files: Navigate to the directory: Create a file name to store the value for the property name : Add the value of the property to the file: Create a ConfigSource in which the file name is the property and the file contents the value of the property: This results in the following XML configuration: <subsystem xmlns="urn:wildfly:microprofile-config-smallrye:1.0"> <config-source name="file-props"> <dir path="/etc/config/prop-files"/> </config-source> </subsystem> 3.2.3. Configuring root directories as ConfigSources You can define a directory as a root directory for multiple MicroProfile ConfigSource directories using the root attribute. The nested root attribute is part of the dir complex attribute for the /subsystem=microprofile-config-smallrye/config-source=* resource. This eliminates the need to specify multiple ConfigSource directories if they share the same root directory. Any files directly within the root directory are ignored. They will not be used for configuration. Top-level directories are treated as ConfigSources. Any nested directories will also be ignored. Note ConfigSources for top-level directories are assigned the ordinal of the /subsystem=microprofile-config-smallrye/config-source=* resource by default. If the top-level directory contains a config_ordinal file, the value specified in the file will override the default ordinal value. 
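A short sketch of the root-directory layout implied by the behavior described above; the /etc/config path, the directory names, and the property values are hypothetical, and the config_ordinal file raises the ordinal of only the top-level directory that contains it:
# Two top-level directories under one root; each becomes its own ConfigSource
mkdir -p /etc/config/app-a /etc/config/app-b
echo "jim" > /etc/config/app-a/name
echo "bob" > /etc/config/app-b/name
# Give app-b a higher ordinal so its value for the shared property wins
echo "500" > /etc/config/app-b/config_ordinal
# Files placed directly in /etc/config itself are ignored, as are nested subdirectories
# Then, in the management CLI, register the root directory (the name root-props is hypothetical)
/subsystem=microprofile-config-smallrye/config-source=root-props:add(dir={path=/etc/config, root=true})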
If two top-level directories with the same ordinal contain the same entry, the names of the directories are sorted alphabetically and the first directory is used. Prerequisites You have installed the MicroProfile Config extension and enabled the microprofile-config-smallrye subsystem. Procedure Open your terminal. Create a directory where you want to store your files: Navigate to the directory that you created: Create a file name to store the value for the property name : Add the value of the property to the file: Run the following command in the CLI to create a ConfigSource in which the filename is the property and the file contains the value of the property: This results in the XML configuration: 3.2.4. Obtaining ConfigSource from a ConfigSource class You can create and configure a custom org.eclipse.microprofile.config.spi.ConfigSource implementation class to provide a source for the configuration values. Procedure The following management CLI command creates a ConfigSource for the implementation class named org.example.MyConfigSource that is provided by a JBoss module named org.example . If you want to use a ConfigSource from the org.example module, add the <module name="org.eclipse.microprofile.config.api"/> dependency to the path/to/org/example/main/module.xml file. This command results in the following XML configuration for the microprofile-config-smallrye subsystem. <subsystem xmlns="urn:wildfly:microprofile-config-smallrye:1.0"> <config-source name="my-config-source"> <class name="org.example.MyConfigSource" module="org.example"/> </config-source> </subsystem> Properties provided by the custom org.eclipse.microprofile.config.spi.ConfigSource implementation class are available to any JBoss EAP deployment. 3.2.5. Obtaining ConfigSource configuration from a ConfigSourceProvider class You can create and configure a custom org.eclipse.microprofile.config.spi.ConfigSourceProvider implementation class that registers implementations for multiple ConfigSource instances. Procedure Create a config-source-provider : The command creates a config-source-provider for the implementation class named org.example.MyConfigSourceProvider that is provided by a JBoss Module named org.example . If you want to use a config-source-provider from the org.example module, add the <module name="org.eclipse.microprofile.config.api"/> dependency to the path/to/org/example/main/module.xml file. This command results in the following XML configuration for the microprofile-config-smallrye subsystem: <subsystem xmlns="urn:wildfly:microprofile-config-smallrye:1.0"> <config-source-provider name="my-config-source-provider"> <class name="org.example.MyConfigSourceProvider" module="org.example"/> </config-source-provider> </subsystem> Properties provided by the ConfigSourceProvider implementation are available to any JBoss EAP deployment. 3.3. MicroProfile Fault Tolerance configuration 3.3.1. Adding the MicroProfile Fault Tolerance extension The MicroProfile Fault Tolerance extension is included in standalone-microprofile.xml and standalone-microprofile-ha.xml configurations that are provided as part of JBoss EAP XP. The extension is not included in the standard standalone.xml configuration. To use the extension, you must manually enable it. Prerequisites JBoss EAP 8.0 with JBoss EAP XP 5.0 is installed. 
Procedure Add the MicroProfile Fault Tolerance extension using the following management CLI command: Enable the microprofile-fault-tolerance-smallrye subsystem using the following management command: Reload the server with the following management command: 3.4. MicroProfile Health configuration 3.4.1. Examining health using the management CLI You can check system health using the management CLI. Procedure Examine health: 3.4.2. Examining health using the management console You can check system health using the management console. A check runtime operation shows the health checks and the global outcome as a boolean value. Procedure Navigate to the Runtime tab and select the server. In the Monitor column, click MicroProfile Health View . 3.4.3. Examining health using the HTTP endpoint The health check is automatically deployed to the health context on JBoss EAP, so you can obtain the current health using the HTTP endpoint. The default address for the /health endpoint, accessible from the management interface, is http://127.0.0.1:9990/health . Procedure To obtain the current health of the server using the HTTP endpoint, use the following URL: Accessing this context displays the health check in JSON format, indicating if the server is healthy. 3.4.4. Enabling authentication for MicroProfile Health You can configure the health context to require authentication for access. Procedure Set the security-enabled attribute to true on the microprofile-health-smallrye subsystem. Reload the server for the changes to take effect. Any subsequent attempt to access the /health endpoint triggers an authentication prompt. 3.4.5. Readiness probes that determine server health and readiness JBoss EAP XP 5.0.0 supports three readiness probes to determine server health and readiness. server-status - returns UP when the server-state is running . boot-errors - returns UP when the probe detects no boot errors. deployment-status - returns UP when the status for all deployments is OK . These readiness probes are enabled by default. You can disable the probes using the MicroProfile Config property mp.health.disable-default-procedures . The following example illustrates the use of the three probes with the check operation: Additional resources MicroProfile Health in JBoss EAP Global status when probes are not defined 3.4.6. Global status when probes are not defined The :empty-readiness-checks-status , :empty-liveness-checks-status , and :empty-startup-checks-status management attributes specify the global status when no readiness , liveness , or startup probes are defined. These attributes allow applications to report 'DOWN' until their probes verify that the application is ready, live, or started up. By default, applications report 'UP'. The :empty-readiness-checks-status attribute specifies the global status for readiness probes if no readiness probes have been defined: The :empty-liveness-checks-status attribute specifies the global status for liveness probes if no liveness probes have been defined: The :empty-startup-checks-status attribute specifies the global status for startup probes if no startup probes have been defined: The /health HTTP endpoint and the :check operation that check readiness , liveness , and startup probes also take into account these attributes. You can also modify these attributes as shown in the following example: 3.5. MicroProfile JWT configuration 3.5.1.
Enabling microprofile-jwt-smallrye subsystem The MicroProfile JWT integration is provided by the microprofile-jwt-smallrye subsystem and is included in the default configuration. If the subsystem is not present in the default configuration, you can add it as follows. Prerequisites JBoss EAP 8.0 with JBoss EAP XP 5.0 is installed. Procedure Enable the MicroProfile JWT smallrye extension in JBoss EAP: Enable the microprofile-jwt-smallrye subsystem: Reload the server: The microprofile-jwt-smallrye subsystem is enabled. 3.6. MicroProfile OpenAPI administration 3.6.1. Enabling MicroProfile OpenAPI The microprofile-openapi-smallrye subsystem is provided in the standalone-microprofile.xml configuration. However, JBoss EAP XP uses standalone.xml by default. You must include the subsystem in standalone.xml to use it. Alternatively, you can follow the procedure Updating standalone configurations with MicroProfile subsystems and extensions to update the standalone.xml configuration file. Procedure Enable the MicroProfile OpenAPI smallrye extension in JBoss EAP: Enable the microprofile-openapi-smallrye subsystem using the following management command: Reload the server. The microprofile-openapi-smallrye subsystem is enabled. 3.6.2. Requesting a MicroProfile OpenAPI document using an Accept HTTP header Request a MicroProfile OpenAPI document, in JSON format, from a deployment using an Accept HTTP header. By default, the OpenAPI endpoint returns a YAML document. Prerequisites The deployment being queried is configured to return a MicroProfile OpenAPI document. Procedure Issue the following curl command to query the /openapi endpoint of the deployment: Replace http://localhost:8080 with the URL and port of the deployment. The Accept header indicates that the JSON document is to be returned using the application/json string. 3.6.3. Requesting a MicroProfile OpenAPI document using an HTTP parameter Request a MicroProfile OpenAPI document, in JSON format, from a deployment using a query parameter in an HTTP request. By default, the OpenAPI endpoint returns a YAML document. Prerequisites The deployment being queried is configured to return a MicroProfile OpenAPI document. Procedure Issue the following curl command to query the /openapi endpoint of the deployment: Replace http://localhost:8080 with the URL and port of the deployment. The HTTP parameter format=JSON indicates that a JSON document is to be returned. 3.6.4. Configuring JBoss EAP to serve a static OpenAPI document Configure JBoss EAP to serve a static OpenAPI document that describes the REST services for the host. When JBoss EAP is configured to serve a static OpenAPI document, the static OpenAPI document is processed before any Jakarta RESTful Web Services and MicroProfile OpenAPI annotations. In a production environment, disable annotation processing when serving a static document. Disabling annotation processing ensures that an immutable and versioned API contract is available for clients. Procedure Create a directory in the application source tree: APPLICATION_ROOT is the directory containing the pom.xml configuration file for the application. Query the OpenAPI endpoint, redirecting the output to a file: By default, the endpoint serves a YAML document; format=JSON specifies that a JSON document is returned.
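A hedged sketch of the two query styles just described, together with the redirect used to capture a static document; the localhost:8080 host and the src/main/webapp/META-INF target directory are assumptions and should be adapted to your deployment and application layout:
# JSON via the Accept header
curl -H "Accept: application/json" http://localhost:8080/openapi
# JSON via the format query parameter
curl "http://localhost:8080/openapi?format=JSON"
# Capture the document for static serving (assumed application layout)
mkdir -p APPLICATION_ROOT/src/main/webapp/META-INF
curl "http://localhost:8080/openapi?format=JSON" > APPLICATION_ROOT/src/main/webapp/META-INF/openapi.json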
Configure the application to skip annotation scanning when processing the OpenAPI document model: Rebuild the application: Deploy the application again using the following management CLI commands: Undeploy the application: Deploy the application: JBoss EAP now serves a static OpenAPI document at the OpenAPI endpoint. 3.6.5. Disabling microprofile-openapi-smallrye You can disable the microprofile-openapi-smallrye subsystem in JBoss EAP XP using the management CLI. Procedure Disable the microprofile-openapi-smallrye subsystem: 3.7. MicroProfile Reactive Messaging administration 3.7.1. Configuring the required MicroProfile reactive messaging extension and subsystem for JBoss EAP If you want to enable asynchronous reactive messaging to your instance of JBoss EAP, you must add its extension through the JBoss EAP management CLI. Prerequisites You added the Reactive Streams Operators with SmallRye extension and subsystem. For more information, see MicroProfile Reactive Streams Operators Subsystem Configuration: Required Extension . You added the Reactive Messaging with SmallRye extension and subsystem. Procedure Open the JBoss EAP management CLI. Enter the following code: Note If you provision a server using Galleon, either on OpenShift or not, make sure you include the microprofile-reactive-messaging Galleon layer to get the core MicroProfile 2.0.1 and reactive messaging functionality, and to enable the required subsystems and extensions. Note that this configuration does not contain the JBoss EAP modules you need to enable connectors. Use the microprofile-reactive-messaging-kafka layer or the microprofile-reactive-messaging-amqp layer to enable the Kafka connector or the AMQP connector, respectively. Verification You have successfully added the required MicroProfile Reactive Messaging extension and subsystem for JBoss EAP if you see success in two places in the resulting code in the management CLI. Tip If the resulting code says reload-required , you have to reload your server configuration to completely apply all of your changes. To reload, in a standalone server CLI, enter reload . 3.8. Standalone server configuration 3.8.1. Standalone server configuration files The JBoss EAP XP includes additional standalone server configuration files, standalone-microprofile.xml and standalone-microprofile-ha.xml . Standard configuration files that are included with JBoss EAP remain unchanged. Note that JBoss EAP XP 5.0.0 does not support the use of domain.xml files or domain mode. Table 3.1. Standalone configuration files available in JBoss EAP XP Configuration File Purpose Included capabilities Excluded capabilities standalone.xml This is the default configuration that is used when you start your standalone server. Includes information about the server, including subsystems, networking, deployments, socket bindings, and other configurable details. Excludes subsystems necessary for messaging or high availability. standalone-microprofile.xml This configuration file supports applications that use MicroProfile. Includes information about the server, including subsystems, networking, deployments, socket bindings, and other configurable details. Excludes the following capabilities: Jakarta Enterprise Beans Messaging Jakarta EE Batch Jakarta Server Faces Jakarta Enterprise Beans timers standalone-ha.xml Includes default subsystems and adds the modcluster and jgroups subsystems for high availability. Excludes subsystems necessary for messaging. 
standalone-microprofile-ha.xml This standalone file supports applications that use MicroProfile. Includes the modcluster and jgroups subsystems for high availability in addition to default subsystems. Excludes subsystems necessary for messaging. standalone-full.xml Includes the messaging-activemq and iiop-openjdk subsystems in addition to default subsystems. standalone-full-ha.xml Support for every possible subsystem. Includes subsystems for messaging and high availability in addition to default subsystems. standalone-load-balancer.xml Support for the minimum subsystems necessary to use the built-in mod_cluster front-end load balancer to load balance other JBoss EAP instances. By default, starting JBoss EAP as a standalone server uses the standalone.xml file. To start JBoss EAP with a standalone MicroProfile configuration, use the -c argument. For example, 3.8.2. Updating standalone configurations with MicroProfile subsystems and extensions You can update standard standalone server configuration files with MicroProfile subsystems and extensions using the docs/examples/enable-microprofile.cli script. The enable-microprofile.cli script is intended as an example script for updating standard standalone server configuration files, not custom configurations. The enable-microprofile.cli script modifies the existing standalone server configuration and adds the following MicroProfile subsystems and extensions if they do not exist in the standalone configuration file: microprofile-config-smallrye microprofile-fault-tolerance-smallrye microprofile-health-smallrye microprofile-jwt-smallrye microprofile-openapi-smallrye The enable-microprofile.cli script outputs a high-level description of the modifications. The configuration is secured using the elytron subsystem. The security subsystem, if present, is removed from the configuration. Prerequisites JBoss EAP 8.0 with JBoss EAP XP 5.0 is installed. Procedure Run the following CLI script to update the default standalone.xml server configuration file: Select a standalone server configuration other than the default standalone.xml server configuration file using the following command: The specified configuration file now includes MicroProfile subsystems and extensions. | [
"<JBOSS_HOME> /bin/jboss-cli.sh -c <<EOF if (outcome != success) of /subsystem=opentelemetry:read-resource /extension=org.wildfly.extension.opentelemetry:add() /subsystem=opentelemetry:add() end-if /extension=org.wildfly.extension.microprofile.telemetry:add /subsystem=microprofile-telemetry:add reload EOF",
"otel.sdk.disabled=false",
"otel.service.name=My Application",
"/subsystem=microprofile-config-smallrye/config-source=props:add(properties={\"name\" = \"jim\"})",
"mkdir -p ~/config/prop-files/",
"cd ~/config/prop-files/",
"touch name",
"echo \"jim\" > name",
"/subsystem=microprofile-config-smallrye/config-source=file-props:add(dir={path=~/config/prop-files})",
"<subsystem xmlns=\"urn:wildfly:microprofile-config-smallrye:1.0\"> <config-source name=\"file-props\"> <dir path=\"/etc/config/prop-files\"/> </config-source> </subsystem>",
"mkdir -p ~/etc/config/prop-files/",
"cd ~/etc/config/prop-files/",
"touch name",
"echo \"jim\" > name",
"/subsystem=microprofile-config-smallrye/config-source=prop-files:add(dir={path=/etc/config, root=true})",
"<subsystem xmlns=\"urn:wildfly:microprofile-config-smallrye:2.0\"> <config-source name=\"prop-files\"> <dir path=\"/etc/config\" root=\"true\"/> </config-source> </subsystem>",
"/subsystem=microprofile-config-smallrye/config-source=my-config-source:add(class={name=org.example.MyConfigSource, module=org.example})",
"<subsystem xmlns=\"urn:wildfly:microprofile-config-smallrye:1.0\"> <config-source name=\"my-config-source\"> <class name=\"org.example.MyConfigSource\" module=\"org.example\"/> </config-source> </subsystem>",
"/subsystem=microprofile-config-smallrye/config-source-provider=my-config-source-provider:add(class={name=org.example.MyConfigSourceProvider, module=org.example})",
"<subsystem xmlns=\"urn:wildfly:microprofile-config-smallrye:1.0\"> <config-source-provider name=\"my-config-source-provider\"> <class name=\"org.example.MyConfigSourceProvider\" module=\"org.example\"/> </config-source-provider> </subsystem>",
"/extension=org.wildfly.extension.microprofile.fault-tolerance-smallrye:add",
"/subsystem=microprofile-fault-tolerance-smallrye:add",
"reload",
"/subsystem=microprofile-health-smallrye:check { \"outcome\" => \"success\", \"result\" => { \"status\" => \"UP\", \"checks\" => [] } }",
"http:// <host> : <port> /health",
"/subsystem=microprofile-health-smallrye:write-attribute(name=security-enabled,value=true)",
"reload",
"[standalone@localhost:9990 /] /subsystem=microprofile-health-smallrye:check { \"outcome\" => \"success\", \"result\" => { \"status\" => \"UP\", \"checks\" => [ { \"name\" => \"boot-errors\", \"status\" => \"UP\" }, { \"name\" => \"server-state\", \"status\" => \"UP\", \"data\" => {\"value\" => \"running\"} }, { \"name\" => \"empty-readiness-checks\", \"status\" => \"UP\" }, { \"name\" => \"deployments-status\", \"status\" => \"UP\" }, { \"name\" => \"empty-liveness-checks\", \"status\" => \"UP\" }, { \"name\" => \"empty-startup-checks\", \"status\" => \"UP\" } ] } }",
"/subsystem=microprofile-health-smallrye:read-attribute(name=empty-readiness-checks-status) { \"outcome\" => \"success\", \"result\" => expression \"USD{env.MP_HEALTH_EMPTY_READINESS_CHECKS_STATUS:UP}\" }",
"/subsystem=microprofile-health-smallrye:read-attribute(name=empty-liveness-checks-status) { \"outcome\" => \"success\", \"result\" => expression \"USD{env.MP_HEALTH_EMPTY_LIVENESS_CHECKS_STATUS:UP}\" }",
"/subsystem=microprofile-health-smallrye:read-attribute(name=empty-startup-checks-status) { \"outcome\" => \"success\", \"result\" => expression \"USD{env.MP_HEALTH_EMPTY_STARTUP_CHECKS_STATUS:UP}\" }",
"/subsystem=microprofile-health-smallrye:write-attribute(name=empty-readiness-checks-status,value=DOWN) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"/extension=org.wildfly.extension.microprofile.jwt-smallrye:add",
"/subsystem=microprofile-jwt-smallrye:add",
"reload",
"/extension=org.wildfly.extension.microprofile.openapi-smallrye:add()",
"/subsystem=microprofile-openapi-smallrye:add()",
"reload",
"curl -v -H'Accept: application/json' http://localhost:8080 /openapi < HTTP/1.1 200 OK {\"openapi\": \"3.0.1\" ... }",
"curl -v http://localhost:8080 /openapi?format=JSON < HTTP/1.1 200 OK",
"mkdir APPLICATION_ROOT /src/main/webapp/META-INF",
"curl http://localhost:8080/openapi?format=JSON > src/main/webapp/META-INF/openapi.json",
"echo \"mp.openapi.scan.disable=true\" > APPLICATION_ROOT /src/main/webapp/META-INF/microprofile-config.properties",
"mvn clean install",
"undeploy microprofile-openapi.war",
"deploy APPLICATION_ROOT /target/microprofile-openapi.war",
"/subsystem=microprofile-openapi-smallrye:remove()",
"[standalone@localhost:9990 /] /extension=org.wildfly.extension.microprofile.reactive-messaging-smallrye:add {\"outcome\" => \"success\"} [standalone@localhost:9990 /] /subsystem=microprofile-reactive-messaging-smallrye:add { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"<EAP_HOME> /bin/standalone.sh -c=standalone-microprofile.xml",
"<EAP_HOME> /bin/jboss-cli.sh --file=docs/examples/enable-microprofile.cli",
"<EAP_HOME> /bin/jboss-cli.sh --file=docs/examples/enable-microprofile.cli -Dconfig=<standalone-full.xml|standalone-ha.xml|standalone-full-ha.xml>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_xp_5.0/administer_microprofile_in_jboss_eap |
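A quick way to confirm that the enable-microprofile.cli script (or the manual CLI steps above) took effect is to read the MicroProfile subsystems back from the running server. The following is a minimal sketch, not a documented procedure: it assumes the server is already running with the updated configuration and that JBOSS_HOME points at your JBoss EAP installation; the subsystem names are the ones added above.

$JBOSS_HOME/bin/jboss-cli.sh --connect <<'EOF'
# Each command returns "outcome" => "success" if the subsystem is present
/subsystem=microprofile-config-smallrye:read-resource
/subsystem=microprofile-health-smallrye:read-resource
/subsystem=microprofile-openapi-smallrye:read-resource
/subsystem=microprofile-jwt-smallrye:read-resource
EOF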
Chapter 5. Exposing the registry | Chapter 5. Exposing the registry By default, the OpenShift image registry is secured during cluster installation so that it serves traffic through TLS. Unlike versions of OpenShift Container Platform, the registry is not exposed outside of the cluster at the time of installation. 5.1. Exposing a default registry manually Instead of logging in to the default OpenShift image registry from within the cluster, you can gain external access to it by exposing it with a route. This external access enables you to log in to the registry from outside the cluster using the route address and to tag and push images to an existing project by using the route host. Prerequisites The following prerequisites are automatically performed: Deploy the Registry Operator. Deploy the Ingress Operator. You have access to the cluster as a user with the cluster-admin role. Procedure You can expose the route by using the defaultRoute parameter in the configs.imageregistry.operator.openshift.io resource. To expose the registry using the defaultRoute : Set defaultRoute to true by running the following command: USD oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge Get the default registry route by running the following command: USD HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}') Get the certificate of the Ingress Operator by running the following command: USD oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm Move the extracted certificate to the system's trusted CA directory by running the following command: USD sudo mv tls.crt /etc/pki/ca-trust/source/anchors/ Enable the cluster's default certificate to trust the route by running the following command: USD sudo update-ca-trust enable Log in with podman using the default route by running the following command: USD sudo podman login -u kubeadmin -p USD(oc whoami -t) USDHOST 5.2. Exposing a secure registry manually Instead of logging in to the OpenShift image registry from within the cluster, you can gain external access to it by exposing it with a route. This allows you to log in to the registry from outside the cluster using the route address, and to tag and push images to an existing project by using the route host. Prerequisites The following prerequisites are automatically performed: Deploy the Registry Operator. Deploy the Ingress Operator. You have access to the cluster as a user with the cluster-admin role. Procedure You can expose the route by using DefaultRoute parameter in the configs.imageregistry.operator.openshift.io resource or by using custom routes. To expose the registry using DefaultRoute : Set DefaultRoute to True : USD oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge Log in with podman : USD HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}') USD podman login -u kubeadmin -p USD(oc whoami -t) --tls-verify=false USDHOST 1 1 --tls-verify=false is needed if the cluster's default certificate for routes is untrusted. You can set a custom, trusted certificate as the default certificate with the Ingress Operator. 
To expose the registry using custom routes: Create a secret with your route's TLS keys: USD oc create secret tls public-route-tls \ -n openshift-image-registry \ --cert=</path/to/tls.crt> \ --key=</path/to/tls.key> This step is optional. If you do not create a secret, the route uses the default TLS configuration from the Ingress Operator. On the Registry Operator: USD oc edit configs.imageregistry.operator.openshift.io/cluster spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls ... Note Only set secretName if you are providing a custom TLS configuration for the registry's route. Troubleshooting Error creating TLS secret | [
"oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge",
"HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"sudo mv tls.crt /etc/pki/ca-trust/source/anchors/",
"sudo update-ca-trust enable",
"sudo podman login -u kubeadmin -p USD(oc whoami -t) USDHOST",
"oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge",
"HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')",
"podman login -u kubeadmin -p USD(oc whoami -t) --tls-verify=false USDHOST 1",
"oc create secret tls public-route-tls -n openshift-image-registry --cert=</path/to/tls.crt> --key=</path/to/tls.key>",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/registry/securing-exposing-registry |
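With the default route exposed and the podman login above succeeding, a typical next step is to tag and push an image to a project through the route. This is a sketch rather than a documented procedure: it assumes $HOST is still set from the earlier steps, that a project named myproject already exists, and that the ubi-minimal image is only an illustrative choice.

podman pull registry.access.redhat.com/ubi8/ubi-minimal
podman tag registry.access.redhat.com/ubi8/ubi-minimal $HOST/myproject/ubi-minimal:latest
podman push $HOST/myproject/ubi-minimal:latest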
Chapter 4. cxf | Chapter 4. cxf 4.1. cxf:list-busses 4.1.1. Description Lists all CXF Busses. 4.1.2. Syntax cxf:list-busses [options] 4.1.3. Options Name Description --help Display this help message --no-format Disable table rendered output 4.2. cxf:list-endpoints 4.2.1. Description Lists all CXF Endpoints on a Bus. 4.2.2. Syntax cxf:list-endpoints [options] [bus] 4.2.3. Arguments Name Description bus The CXF bus name where to look for the Endpoints 4.2.4. Options Name Description --help Display this help message -f, --fulladdress Display full address of an endpoint --no-format Disable table rendered output 4.3. cxf:start-endpoint 4.3.1. Description Starts a CXF Endpoint on a Bus. 4.3.2. Syntax cxf:start-endpoint [options] bus endpoint 4.3.3. Arguments Name Description bus The CXF bus name where to look for the Endpoint endpoint The Endpoint name to start 4.3.4. Options Name Description --help Display this help message 4.4. cxf:stop-endpoint 4.4.1. Description Stops a CXF Endpoint on a Bus. 4.4.2. Syntax cxf:stop-endpoint [options] bus endpoint 4.4.3. Arguments Name Description bus The CXF bus name where to look for the Endpoint endpoint The Endpoint name to stop 4.4.4. Options Name Description --help Display this help message | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_console_reference/cxf |
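Taken together, a typical Karaf console session might combine these commands to locate and restart a single endpoint. The bus and endpoint names below are illustrative placeholders, not values defined by the commands above.

karaf@root()> cxf:list-busses
karaf@root()> cxf:list-endpoints -f my-cxf-bus
karaf@root()> cxf:stop-endpoint my-cxf-bus MyServiceEndpoint
karaf@root()> cxf:start-endpoint my-cxf-bus MyServiceEndpoint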
Chapter 7. Using SOAP 1.2 Messages | Chapter 7. Using SOAP 1.2 Messages Abstract Apache CXF provides tools to generate a SOAP 1.2 binding which does not use any SOAP headers. You can add SOAP headers to your binding using any text or XML editor. 7.1. Adding a SOAP 1.2 Binding to a WSDL Document Using wsdl2soap Note To use wsdl2soap, you will need to download the Apache CXF distribution. To generate a SOAP 1.2 binding using wsdl2soap, use the following command: wsdl2soap -i port-type-name -b binding-name -soap12 -d output-directory -o output-file -n soap-body-namespace -style (document/rpc) -use (literal/encoded) -v -verbose -quiet wsdlurl The tool has the following required arguments: Option Interpretation -i port-type-name Specifies the portType element for which a binding is generated. -soap12 Specifies that the generated binding uses SOAP 1.2. wsdlurl The path and name of the WSDL file containing the portType element definition. The tool has the following optional arguments: Option Interpretation -b binding-name Specifies the name of the generated SOAP binding. -soap12 Specifies that the generated binding will use SOAP 1.2. -d output-directory Specifies the directory to place the generated WSDL file. -o output-file Specifies the name of the generated WSDL file. -n soap-body-namespace Specifies the SOAP body namespace when the style is RPC. -style (document/rpc) Specifies the encoding style (document or RPC) to use in the SOAP binding. The default is document. -use (literal/encoded) Specifies the binding use (encoded or literal) to use in the SOAP binding. The default is literal. -v Displays the version number for the tool. -verbose Displays comments during the code generation process. -quiet Suppresses comments during the code generation process. The -i port-type-name and wsdlurl arguments are required. If the -style rpc argument is specified, the -n soap-body-namespace argument is also required. All other arguments are optional and can be listed in any order. Important wsdl2soap does not support the generation of document/encoded SOAP 1.2 bindings. Example If your system has an interface that takes orders and offers a single operation to process the orders, it is defined in a WSDL fragment similar to the one shown in Example 7.1, "Ordering System Interface" . Example 7.1. Ordering System Interface The SOAP binding generated for orderWidgets is shown in Example 7.2, "SOAP 1.2 Binding for orderWidgets" . Example 7.2. SOAP 1.2 Binding for orderWidgets This binding specifies that messages are sent using the document/literal message style. 7.2. Adding Headers to a SOAP 1.2 Message Overview SOAP message headers are defined by adding soap12:header elements to your SOAP 1.2 message. The soap12:header element is an optional child of the input , output , and fault elements of the binding. The SOAP header becomes part of the parent message. A SOAP header is defined by specifying a message and a message part. Each SOAP header can only contain one message part, but you can insert as many headers as needed. Syntax The syntax for defining a SOAP header is shown in Example 7.3, "SOAP Header Syntax" . Example 7.3. SOAP Header Syntax The soap12:header element's attributes are described in Table 7.1, " soap12:header Attributes" . Table 7.1. soap12:header Attributes Attribute Description message A required attribute specifying the qualified name of the message from which the part being inserted into the header is taken.
part A required attribute specifying the name of the message part inserted into the SOAP header. use Specifies if the message parts are to be encoded using encoding rules. If set to encoded the message parts are encoded using the encoding rules specified by the value of the encodingStyle attribute. If set to literal , the message parts are defined by the schema types referenced. encodingStyle Specifies the encoding rules used to construct the message. namespace Defines the namespace to be assigned to the header element serialized with use="encoded" . Splitting messages between body and header The message part inserted into the SOAP header can be any valid message part from the contract. It can even be a part from the parent message which is being used as the SOAP body. Because it is unlikely that you would send information twice in the same message, the SOAP 1.2 binding provides a means for specifying the message parts that are inserted into the SOAP body. The soap12:body element has an optional attribute, parts , that takes a space delimited list of part names. When parts is defined, only the message parts listed are inserted into the body of the SOAP 1.2 message. You can then insert the remaining parts into the message's header. Note When you define a SOAP header using parts of the parent message, Apache CXF automatically fills in the SOAP headers for you. Example Example 7.4, "SOAP 1.2 Binding with a SOAP Header" shows a modified version of the orderWidgets service shown in Example 7.1, "Ordering System Interface" . This version is modified so that each order has an xsd:base64binary value placed in the header of the request and the response. The header is defined as being the keyVal part from the widgetKey message. In this case you are responsible for adding the application logic to create the header because it is not part of the input or output message. Example 7.4. SOAP 1.2 Binding with a SOAP Header You can modify Example 7.4, "SOAP 1.2 Binding with a SOAP Header" so that the header value is a part of the input and output messages, as shown in Example 7.5, "SOAP 1.2 Binding for orderWidgets with a SOAP Header" . In this case keyVal is a part of the input and output messages. In the soap12:body elements the parts attribute specifies that keyVal should not be inserted into the body. However, it is inserted into the header. Example 7.5. SOAP 1.2 Binding for orderWidgets with a SOAP Header | [
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <definitions name=\"widgetOrderForm.wsdl\" targetNamespace=\"http://widgetVendor.com/widgetOrderForm\" xmlns=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap12=\"http://schemas.xmlsoap.org/wsdl/soap12/\" xmlns:tns=\"http://widgetVendor.com/widgetOrderForm\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsd1=\"http://widgetVendor.com/types/widgetTypes\" xmlns:SOAP-ENC=\"http://schemas.xmlsoap.org/soap/encoding/\"> <message name=\"widgetOrder\"> <part name=\"numOrdered\" type=\"xsd:int\"/> </message> <message name=\"widgetOrderBill\"> <part name=\"price\" type=\"xsd:float\"/> </message> <message name=\"badSize\"> <part name=\"numInventory\" type=\"xsd:int\"/> </message> <portType name=\"orderWidgets\"> <operation name=\"placeWidgetOrder\"> <input message=\"tns:widgetOrder\" name=\"order\"/> <output message=\"tns:widgetOrderBill\" name=\"bill\"/> <fault message=\"tns:badSize\" name=\"sizeFault\"/> </operation> </portType> </definitions>",
"<binding name=\"orderWidgetsBinding\" type=\"tns:orderWidgets\"> <soap12:binding style=\"document\" transport=\"http://schemas.xmlsoap.org/soap/http\"/> <operation name=\"placeWidgetOrder\"> <soap12:operation soapAction=\"\" style=\"document\"/> <input name=\"order\"> <soap12:body use=\"literal\"/> </input> <output name=\"bill\"> <wsoap12:body use=\"literal\"/> </output> <fault name=\"sizeFault\"> <soap12:body use=\"literal\"/> </fault> </operation> </binding>",
"<binding name=\"headwig\"> <soap12:binding style=\"document\" transport=\"http://schemas.xmlsoap.org/soap/http\"/> <operation name=\"weave\"> <soap12:operation soapAction=\"\" style=\"documment\"/> <input name=\"grain\"> <soap12:body ... /> <soap12:header message=\" QName \" part=\" partName \" use=\"literal|encoded\" encodingStyle=\" encodingURI \" namespace=\" namespaceURI \" /> </input> </binding>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <definitions name=\"widgetOrderForm.wsdl\" targetNamespace=\"http://widgetVendor.com/widgetOrderForm\" xmlns=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap12=\"http://schemas.xmlsoap.org/wsdl/soap12/\" xmlns:tns=\"http://widgetVendor.com/widgetOrderForm\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsd1=\"http://widgetVendor.com/types/widgetTypes\" xmlns:SOAP-ENC=\"http://schemas.xmlsoap.org/soap/encoding/\"> <types> <schema targetNamespace=\"http://widgetVendor.com/types/widgetTypes\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\"> <element name=\"keyElem\" type=\"xsd:base64Binary\"/> </schema> </types> <message name=\"widgetOrder\"> <part name=\"numOrdered\" type=\"xsd:int\"/> </message> <message name=\"widgetOrderBill\"> <part name=\"price\" type=\"xsd:float\"/> </message> <message name=\"badSize\"> <part name=\"numInventory\" type=\"xsd:int\"/> </message> <message name=\"widgetKey\"> <part name=\"keyVal\" element=\"xsd1:keyElem\"/> </message> <portType name=\"orderWidgets\"> <operation name=\"placeWidgetOrder\"> <input message=\"tns:widgetOrder\" name=\"order\"/> <output message=\"tns:widgetOrderBill\" name=\"bill\"/> <fault message=\"tns:badSize\" name=\"sizeFault\"/> </operation> </portType> <binding name=\"orderWidgetsBinding\" type=\"tns:orderWidgets\"> <soap12:binding style=\"document\" transport=\"http://schemas.xmlsoap.org/soap/http\"/> <operation name=\"placeWidgetOrder\"> <soap12:operation soapAction=\"\" style=\"document\"/> <input name=\"order\"> <soap12:body use=\"literal\"/> <soap12:header message=\"tns:widgetKey\" part=\"keyVal\"/> </input> <output name=\"bill\"> <soap12:body use=\"literal\"/> <soap12:header message=\"tns:widgetKey\" part=\"keyVal\"/> </output> <fault name=\"sizeFault\"> <soap12:body use=\"literal\"/> </fault> </operation> </binding> </definitions>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <definitions name=\"widgetOrderForm.wsdl\" targetNamespace=\"http://widgetVendor.com/widgetOrderForm\" xmlns=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap12=\"http://schemas.xmlsoap.org/wsdl/soap12/\" xmlns:tns=\"http://widgetVendor.com/widgetOrderForm\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsd1=\"http://widgetVendor.com/types/widgetTypes\" xmlns:SOAP-ENC=\"http://schemas.xmlsoap.org/soap/encoding/\"> <types> <schema targetNamespace=\"http://widgetVendor.com/types/widgetTypes\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\"> <element name=\"keyElem\" type=\"xsd:base64Binary\"/> </schema> </types> <message name=\"widgetOrder\"> <part name=\"numOrdered\" type=\"xsd:int\"/> <part name=\"keyVal\" element=\"xsd1:keyElem\"/> </message> <message name=\"widgetOrderBill\"> <part name=\"price\" type=\"xsd:float\"/> <part name=\"keyVal\" element=\"xsd1:keyElem\"/> </message> <message name=\"badSize\"> <part name=\"numInventory\" type=\"xsd:int\"/> </message> <portType name=\"orderWidgets\"> <operation name=\"placeWidgetOrder\"> <input message=\"tns:widgetOrder\" name=\"order\"/> <output message=\"tns:widgetOrderBill\" name=\"bill\"/> <fault message=\"tns:badSize\" name=\"sizeFault\"/> </operation> </portType> <binding name=\"orderWidgetsBinding\" type=\"tns:orderWidgets\"> <soap12:binding style=\"document\" transport=\"http://schemas.xmlsoap.org/soap/http\"/> <operation name=\"placeWidgetOrder\"> <soap12:operation soapAction=\"\" style=\"document\"/> <input name=\"order\"> <soap12:body use=\"literal\" parts=\"numOrdered\"/> <soap12:header message=\"tns:widgetOrder\" part=\"keyVal\"/> </input> <output name=\"bill\"> <soap12:body use=\"literal\" parts=\"bill\"/> <soap12:header message=\"tns:widgetOrderBill\" part=\"keyVal\"/> </output> <fault name=\"sizeFault\"> <soap12:body use=\"literal\"/> </fault> </operation> </binding> </definitions>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/FUSECXFSOAP12 |
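Tying the synopsis back to the example, a wsdl2soap invocation for the orderWidgets port type might look like the following sketch. The output directory and file name are illustrative, and widgetOrderForm.wsdl is assumed to be the file holding the port type definition from Example 7.1.

wsdl2soap -i orderWidgets -b orderWidgetsBinding -soap12 -d ./generated -o orderWidgetsSoap12.wsdl widgetOrderForm.wsdl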
3.2. SystemTap Scripts | 3.2. SystemTap Scripts For the most part, SystemTap scripts are the foundation of each SystemTap session. SystemTap scripts instruct SystemTap on what type of information to collect, and what to do once that information is collected. As stated in Chapter 3, Understanding How SystemTap Works , SystemTap scripts are made up of two components: events and handlers . Once a SystemTap session is underway, SystemTap monitors the operating system for the specified events and executes the handlers as they occur. Note An event and its corresponding handler is collectively called a probe . A SystemTap script can have multiple probes. A probe's handler is commonly referred to as a probe body . In terms of application development, using events and handlers is similar to instrumenting the code by inserting diagnostic print statements in a program's sequence of commands. These diagnostic print statements allow you to view a history of commands executed once the program is run. SystemTap scripts allow insertion of the instrumentation code without recompilation of the code and allows more flexibility with regard to handlers. Events serve as the triggers for handlers to run; handlers can be specified to record specified data and print it in a certain manner. Format SystemTap scripts use the .stp file extension and contains probes written in the following format: SystemTap supports multiple events per probe; multiple events are delimited by a comma ( , ). If multiple events are specified in a single probe, SystemTap executes the handler when any of the specified events occurs. Each probe has a corresponding statement block . This statement block is enclosed in braces ( { } ) and contains the statements to be executed per event. SystemTap executes these statements in sequence; special separators or terminators are generally not necessary between multiple statements. Note Statement blocks in SystemTap scripts follow the same syntax and semantics as the C programming language. A statement block can be nested within another statement block. Systemtap allows you to write functions to factor out code to be used by a number of probes. Thus, rather than repeatedly writing the same series of statements in multiple probes, you can just place the instructions in a function , as in: The statements in function_name are executed when the probe for event executes. The arguments are optional values passed into the function. Important Section 3.2, "SystemTap Scripts" is designed to introduce readers to the basics of SystemTap scripts. To understand SystemTap scripts better, it is advisable that you see Chapter 4, Useful SystemTap Scripts ; each section therein provides a detailed explanation of the script, its events, handlers, and expected output. 3.2.1. Event SystemTap events can be broadly classified into two types: synchronous and asynchronous . Synchronous Events A synchronous event occurs when any process executes an instruction at a particular location in kernel code. This gives other events a reference point from which more contextual data may be available. Examples of synchronous events include: syscall. system_call The entry to the system call system_call . If the exit from a syscall is desired, appending a .return to the event monitor the exit of the system call instead. For example, to specify the entry and exit of the close system call, use syscall.close and syscall.close.return respectively. vfs. file_operation The entry to the file_operation event for Virtual File System (VFS). 
Similar to syscall event, appending a .return to the event monitors the exit of the file_operation operation. kernel.function(" function ") The entry to the function kernel function. For example, kernel.function("sys_open") refers to the event that occurs when the sys_open kernel function is called by any thread in the system. To specify the return of the sys_open kernel function, append the return string to the event statement; that is, kernel.function("sys_open").return . When defining probe events, you can use asterisk ( * ) for wildcards. You can also trace the entry or exit of a function in a kernel source file. Consider the following example: Example 3.1. wildcards.stp probe kernel.function("*@net/socket.c") { } probe kernel.function("*@net/socket.c").return { } In the example, the first probe's event specifies the entry of ALL functions in the net/socket.c kernel source file. The second probe specifies the exit of all those functions. Note that in this example, there are no statements in the handler; as such, no information will be collected or displayed. kernel.trace(" tracepoint ") The static probe for tracepoint . Recent kernels (2.6.30 and newer) include instrumentation for specific events in the kernel. These events are statically marked with tracepoints. One example of a tracepoint available in SystemTap is kernel.trace("kfree_skb") , which indicates each time a network buffer is freed in the kernel. module(" module ").function(" function ") Allows you to probe functions within modules. For example: Example 3.2. moduleprobe.stp probe module("ext3").function("*") { } probe module("ext3").function("*").return { } The first probe in Example 3.2, "moduleprobe.stp" points to the entry of all functions for the ext3 module. The second probe points to the exits of all functions for that same module; the use of the .return suffix is similar to kernel.function() . Note that the probes in Example 3.2, "moduleprobe.stp" do not contain statements in the probe handlers, and as such will not print any useful data (as in Example 3.1, "wildcards.stp" ). A system's kernel modules are typically located in /lib/modules/ kernel_version , where kernel_version refers to the currently loaded kernel version. Modules use the file name extension .ko . Asynchronous Events Asynchronous events are not tied to a particular instruction or location in code. This family of probe points consists mainly of counters, timers, and similar constructs. Examples of asynchronous events include: begin The startup of a SystemTap session; that is, as soon as the SystemTap script is run. end The end of a SystemTap session. timer events An event that specifies a handler to be executed periodically. For example: Example 3.3. timer-s.stp probe timer.s(4) { printf("hello world\n") } Example 3.3, "timer-s.stp" is an example of a probe that prints hello world every four seconds. Note that you can also use the following timer events: timer.ms( milliseconds ) timer.us( microseconds ) timer.ns( nanoseconds ) timer.hz( hertz ) timer.jiffies( jiffies ) When used in conjunction with other probes that collect information, timer events allows you to print periodic updates and see how that information changes over time. Important SystemTap supports the use of a large collection of probe events. For more information about supported events, see the stapprobes (3) manual page. The SEE ALSO section of stapprobes (3) also contains links to other manual pages that discuss supported events for specific subsystems and components. 3.2.2. 
Systemtap Handler/Body Consider the following sample script: Example 3.4. helloworld.stp probe begin { printf ("hello world\n") exit () } In Example 3.4, "helloworld.stp" , the begin event (the start of the session) triggers the handler enclosed in { } , which simply prints hello world followed by a new line, then exits. Note SystemTap scripts continue to run until the exit() function executes. If the users wants to stop the execution of the script, it can interrupted manually with Ctrl + C . printf ( ) Statements The printf() statement is one of the simplest functions for printing data. printf() can also be used to display data using many SystemTap functions in the following format: printf (" format string \n", arguments ) The format string specifies how arguments should be printed. The format string of Example 3.4, "helloworld.stp" simply instructs SystemTap to print hello world and contains no format specifiers. You can use the format specifiers %s (for strings) and %d (for numbers) in format strings, depending on your list of arguments. Format strings can have multiple format specifiers, each matching a corresponding argument; multiple arguments are delimited by a comma ( , ). Note Semantically, the SystemTap printf function is very similar to its C language counterpart. The aforementioned syntax and format for SystemTap's printf function is identical to that of the C-style printf . To illustrate this, consider the following probe example: Example 3.5. variables-in-printf-statements.stp probe syscall.open { printf ("%s(%d) open\n", execname(), pid()) } Example 3.5, "variables-in-printf-statements.stp" instructs SystemTap to probe all entries to the system call open ; for each event, it prints the current execname() (a string with the executable name) and pid() (the current process ID number), followed by the word open . A snippet of this probe's output would look like: SystemTap Functions SystemTap supports many functions that can be used as printf() arguments. Example 3.5, "variables-in-printf-statements.stp" uses the SystemTap functions execname() (name of the process that called a kernel function/performed a system call) and pid() (current process ID). The following is a list of commonly-used SystemTap functions: tid() The ID of the current thread. uid() The ID of the current user. cpu() The current CPU number. gettimeofday_s() The number of seconds since UNIX epoch (January 1, 1970). ctime() Convert number of seconds since UNIX epoch to date. pp() A string describing the probe point currently being handled. thread_indent() This particular function is quite useful, providing you with a way to better organize your print results. The function takes one argument, an indentation delta, which indicates how many spaces to add or remove from a thread's "indentation counter". It then returns a string with some generic trace data along with an appropriate number of indentation spaces. The generic data included in the returned string includes a timestamp (number of microseconds since the first call to thread_indent() by the thread), a process name, and the thread ID. This allows you to identify what functions were called, who called them, and the duration of each function call. If call entries and exits immediately precede each other, it is easy to match them. However, in most cases, after a first function call entry is made, several other call entries and exits may be made before the first call exits. 
The indentation counter helps you match an entry with its corresponding exit by indenting the function call if it is not the exit of the one. Consider the following example on the use of thread_indent() : Example 3.6. thread_indent.stp probe kernel.function("*@net/socket.c") { printf ("%s -> %s\n", thread_indent(1), probefunc()) } probe kernel.function("*@net/socket.c").return { printf ("%s <- %s\n", thread_indent(-1), probefunc()) } Example 3.6, "thread_indent.stp" prints out the thread_indent() and probe functions at each event in the following format: This sample output contains the following information: The time (in microseconds) since the initial thread_indent() call for the thread. The process name (and its corresponding ID) that made the function call. An arrow signifying whether the call was an entry ( <- ) or an exit ( -> ); the indentations help you match specific function call entries with their corresponding exits. The name of the function called by the process. name Identifies the name of a specific system call. This variable can only be used in probes that use the event syscall. system_call . target() Used in conjunction with either of the following two commands: stap script -x process ID stap script -c command If you want to specify a script to take an argument of a process ID or command, use target() as the variable in the script to refer to it. For example: Example 3.7. targetexample.stp probe syscall.* { if (pid() == target()) printf("%s/n", name) } When Example 3.7, "targetexample.stp" is run with the argument -x process ID , it watches all system calls (as specified by the syscall.* event) and prints out the name of all system calls made by the specified process. This has the same effect as specifying if (pid() == process ID ) each time you wish to target a specific process. However, using target() makes it easier to re-use the script, giving you the ability to simply pass a process ID as an argument each time you wish to run the script. For example: stap targetexample.stp -x process ID For more information about supported SystemTap functions, see stapfuncs (3) . | [
"probe event { statements }",
"function function_name ( arguments ){ statements } probe event { function_name ( arguments )}",
"probe kernel.function(\"*@net/socket.c\") { } probe kernel.function(\"*@net/socket.c\").return { }",
"probe module(\"ext3\").function(\"*\") { } probe module(\"ext3\").function(\"*\").return { }",
"probe timer.s(4) { printf(\"hello world\\n\") }",
"probe begin { printf (\"hello world\\n\") exit () }",
"printf (\" format string \\n\", arguments )",
"probe syscall.open { printf (\"%s(%d) open\\n\", execname(), pid()) }",
"vmware-guestd(2206) open hald(2360) open hald(2360) open hald(2360) open df(3433) open df(3433) open df(3433) open hald(2360) open",
"probe kernel.function(\"*@net/socket.c\") { printf (\"%s -> %s\\n\", thread_indent(1), probefunc()) } probe kernel.function(\"*@net/socket.c\").return { printf (\"%s <- %s\\n\", thread_indent(-1), probefunc()) }",
"0 ftp(7223): -> sys_socketcall 1159 ftp(7223): -> sys_socket 2173 ftp(7223): -> __sock_create 2286 ftp(7223): -> sock_alloc_inode 2737 ftp(7223): <- sock_alloc_inode 3349 ftp(7223): -> sock_alloc 3389 ftp(7223): <- sock_alloc 3417 ftp(7223): <- __sock_create 4117 ftp(7223): -> sock_create 4160 ftp(7223): <- sock_create 4301 ftp(7223): -> sock_map_fd 4644 ftp(7223): -> sock_map_file 4699 ftp(7223): <- sock_map_file 4715 ftp(7223): <- sock_map_fd 4732 ftp(7223): <- sys_socket 4775 ftp(7223): <- sys_socketcall",
"probe syscall.* { if (pid() == target()) printf(\"%s/n\", name) }"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_beginners_guide/scripts |
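The probes shown above do not have to be saved to a .stp file first; stap can also take a script on the command line with the -e option. The following is a minimal sketch that combines the syscall probe from Example 3.5 with a timer that ends the session after ten seconds. Running stap normally requires root privileges or membership in the stapusr/stapdev groups, and some probe types also need matching kernel debuginfo.

stap -e 'probe syscall.open { printf("%s(%d) open\n", execname(), pid()) } probe timer.s(10) { exit() }'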
3.3. build-id Unique Identification of Binaries | 3.3. build-id Unique Identification of Binaries Each executable or shared library built with Red Hat Enterprise Linux Server 6 or later is assigned a unique 160-bit SHA-1 identification string, generated as a checksum of selected parts of the binary. This allows two builds of the same program on the same host to always produce consistent build-ids and binary content. Display the build-id of a binary with the following command: Unique identifiers of binaries are useful in cases such as analyzing core files, as documented in Section 4.2.1, "Installing Debuginfo Packages for Core Files Analysis" . | [
"eu-readelf -n /bin/bash [...] Note section [ 3] '.note.gnu.build-id' of 36 bytes at offset 0x274: Owner Data size Type GNU 20 GNU_BUILD_ID Build ID: efdd0b5e69b0742fa5e5bad0771df4d1df2459d1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/compiling-build-id |
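If you only need the identifier itself, you can filter the eu-readelf output in a shell. The check against /usr/lib/debug/.build-id below is a sketch that assumes your debuginfo packages install debug files under that conventional layout; adjust the path if your system differs.

id=$(eu-readelf -n /bin/bash | awk '/Build ID:/ {print $3}')
echo "$id"
ls /usr/lib/debug/.build-id/${id:0:2}/${id:2}.debug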
Chapter 19. Authenticating KIE Server through RH-SSO | Chapter 19. Authenticating KIE Server through RH-SSO KIE Server provides a REST API for third-party clients. If you integrate KIE Server with RH-SSO, you can delegate third-party client identity management to the RH-SSO server. After you create a realm client for Red Hat Process Automation Manager and set up the RH-SSO client adapter for Red Hat JBoss EAP, you can set up RH-SSO authentication for KIE Server. Prerequisites RH-SSO is installed as described in Chapter 16, Installing and configuring RH-SSO . At least one user with the kie-server role has been added to RH-SSO as described in Section 17.1, "Adding Red Hat Process Automation Manager users" . KIE Server is installed in a Red Hat JBoss EAP 7.4 instance, as described in Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 . This chapter contains the following sections: Section 19.1, "Creating the KIE Server client on RH-SSO" Section 19.2, "Installing and configuring KIE Server with the client adapter" Section 19.3, "KIE Server token-based authentication" Note Except for Section 19.1, "Creating the KIE Server client on RH-SSO" , this section is intended for standalone installations. If you are integrating RH-SSO and Red Hat Process Automation Manager on Red Hat OpenShift Container Platform, complete the steps in Section 19.1, "Creating the KIE Server client on RH-SSO" and then deploy the Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform. For information about deploying Red Hat Process Automation Manager on Red Hat OpenShift Container Platform, see Deploying Red Hat Process Automation Manager on Red Hat OpenShift Container Platform . 19.1. Creating the KIE Server client on RH-SSO Use the RH-SSO Admin Console to create a KIE Server client in an existing realm. Prerequisites KIE Server is installed in a Red Hat JBoss EAP 7.4 server, as described in Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 . RH-SSO is installed as described in Chapter 16, Installing and configuring RH-SSO . At least one user with the kie-server role has been added to RH-SSO as described in Section 17.1, "Adding Red Hat Process Automation Manager users" . Procedure In the RH-SSO Admin Console, open the security realm that you created in Chapter 16, Installing and configuring RH-SSO . Click Clients and click Create . The Add Client page opens. On the Add Client page, provide the required information to create a KIE Server client for your realm, then click Save . For example: Client ID : kie-execution-server Root URL : http:// localhost :8080/kie-server Client protocol : openid-connect Note If you are configuring RH-SSO with Red Hat OpenShift Container Platform, enter the URL that is exposed by the KIE Server routes. Your OpenShift administrator can provide this URL if necessary. The new client Access Type is set to public by default. Change it to confidential and click Save again. Navigate to the Credentials tab and copy the secret key. The secret key is required to configure the kie-execution-server client. Note The RH-SSO server client uses one URL to a single KIE Server deployment. The following error message might be displayed if there are two or more deployment configurations: We are sorry... Invalid parameter: redirect_uri To resolve this error, append /* to the Valid Redirect URIs field in the client configuration. 
On the Configure page, go to Clients > kie-execution-server > Settings , and append the Valid Redirect URIs field with /* , for example: 19.2. Installing and configuring KIE Server with the client adapter After you install RH-SSO, you must install the RH-SSO client adapter for Red Hat JBoss EAP and configure it for KIE Server. Prerequisites KIE Server is installed in a Red Hat JBoss EAP 7.4 server, as described in Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 . RH-SSO is installed as described in Chapter 16, Installing and configuring RH-SSO . At least one user with the kie-server role has been added to RH-SSO as described in Section 17.1, "Adding Red Hat Process Automation Manager users" . Note If you deployed KIE Server to a different application server than Business Central, install and configure RH-SSO on your second server as well. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required) and then select the product and version from the drop-down options: Product: Red Hat Single Sign-On Version: 7.5 Download Red Hat Single Sign-On 7.5 Client Adapter for JBoss EAP 7 ( rh-sso-7.5.0-eap7-adapter.zip or the latest version). Extract and install the adapter zip file. For installation instructions, see the "JBoss EAP Adapter" section of the Red Hat Single Sign On Securing Applications and Services Guide . Go to EAP_HOME /standalone/configuration and open the standalone-full.xml file. Delete the <single-sign-on/> element from both of the files. Navigate to EAP_HOME /standalone/configuration directory in your Red Hat JBoss EAP installation and edit the standalone-full.xml file to add the RH-SSO subsystem configuration. For example: Navigate to EAP_HOME /standalone/configuration in your Red Hat JBoss EAP installation and edit the standalone-full.xml file to add the RH-SSO subsystem configuration. For example: <subsystem xmlns="urn:jboss:domain:keycloak:1.1"> <secure-deployment name="kie-server.war"> <realm>demo</realm> <realm-public-key>MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrVrCuTtArbgaZzL1hvh0xtL5mc7o0NqPVnYXkLvgcwiC3BjLGw1tGEGoJaXDuSaRllobm53JBhjx33UNv+5z/UMG4kytBWxheNVKnL6GgqlNabMaFfPLPCF8kAgKnsi79NMo+n6KnSY8YeUmec/p2vjO2NjsSAVcWEQMVhJ31LwIDAQAB</realm-public-key> <auth-server-url>http://localhost:8180/auth</auth-server-url> <ssl-required>external</ssl-required> <resource>kie-execution-server</resource> <enable-basic-auth>true</enable-basic-auth> <credential name="secret">03c2b267-7f64-4647-8566-572be673f5fa</credential> <principal-attribute>preferred_username</principal-attribute> </secure-deployment> </subsystem> <system-properties> <property name="org.kie.server.sync.deploy" value="false"/> </system-properties> In this example: secure-deployment name is the name of your application WAR file. realm is the name of the realm that you created for the applications to use. realm-public-key is the public key of the realm you created. You can find the key in the Keys tab in the Realm settings page of the realm you created in the RH-SSO Admin Console. If you do not provide a value for this public key, the server retrieves it automatically. auth-server-url is the URL for the RH-SSO authentication server. resource is the name for the server client that you created. enable-basic-auth is the setting to enable basic authentication mechanism, so that the clients can use both token-based and basic authentication approaches to perform the requests. credential name is the secret key of the server client you created. 
You can find the key in the Credentials tab on the Clients page of the RH-SSO Admin Console. principal-attribute is the attribute for displaying the user name in the application. If you do not provide this value, your User Id is displayed in the application instead of your user name. Save your configuration changes. Use the following command to restart the Red Hat JBoss EAP server and run KIE Server. For example: When KIE Server is running, enter the following command to check the server status, where <KIE_SERVER_USER> is a user with the kie-server role and <PASSWORD> is the password for that user: 19.3. KIE Server token-based authentication You can also use token-based authentication for communication between Red Hat Process Automation Manager and KIE Server. You can use the complete token as a system property of your application server, instead of the user name and password, for your applications. However, you must ensure that the token does not expire while the applications are interacting because the token is not automatically refreshed. To get the token, see Section 20.2, "Token-based authentication" . Procedure To configure Business Central to manage KIE Server using tokens: Set the org.kie.server.token property. Make sure that the org.kie.server.user and org.kie.server.pwd properties are not set. Red Hat Process Automation Manager will then use the Authorization: Bearer USDTOKEN authentication method. To use the REST API using the token-based authentication: Set the org.kie.server.controller.token property. Make sure that the org.kie.server.controller.user and org.kie.server.controller.pwd properties are not set. Note Because KIE Server is unable to refresh the token, use a high-lifespan token. A token's lifespan must not exceed January 19, 2038. Check with your security best practices to see whether this is a suitable solution for your environment. | [
"http://localhost:8080/kie-server/*",
"<subsystem xmlns=\"urn:jboss:domain:keycloak:1.1\"> <secure-deployment name=\"kie-server.war\"> <realm>demo</realm> <realm-public-key>MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrVrCuTtArbgaZzL1hvh0xtL5mc7o0NqPVnYXkLvgcwiC3BjLGw1tGEGoJaXDuSaRllobm53JBhjx33UNv+5z/UMG4kytBWxheNVKnL6GgqlNabMaFfPLPCF8kAgKnsi79NMo+n6KnSY8YeUmec/p2vjO2NjsSAVcWEQMVhJ31LwIDAQAB</realm-public-key> <auth-server-url>http://localhost:8180/auth</auth-server-url> <ssl-required>external</ssl-required> <resource>kie-execution-server</resource> <enable-basic-auth>true</enable-basic-auth> <credential name=\"secret\">03c2b267-7f64-4647-8566-572be673f5fa</credential> <principal-attribute>preferred_username</principal-attribute> </secure-deployment> </subsystem> <system-properties> <property name=\"org.kie.server.sync.deploy\" value=\"false\"/> </system-properties>",
"EXEC_SERVER_HOME/bin/standalone.sh -c standalone-full.xml -Dorg.kie.server.id=<ID> -Dorg.kie.server.user=<USER> -Dorg.kie.server.pwd=<PWD> -Dorg.kie.server.location=<LOCATION_URL> -Dorg.kie.server.controller=<CONTROLLER_URL> -Dorg.kie.server.controller.user=<CONTROLLER_USER> -Dorg.kie.server.controller.pwd=<CONTOLLER_PASSWORD>",
"EXEC_SERVER_HOME/bin/standalone.sh -c standalone-full.xml -Dorg.kie.server.id=kieserver1 -Dorg.kie.server.user=kieserver -Dorg.kie.server.pwd=password -Dorg.kie.server.location=http://localhost:8080/kie-server/services/rest/server -Dorg.kie.server.controller=http://localhost:8080/business-central/rest/controller -Dorg.kie.server.controller.user=kiecontroller -Dorg.kie.server.controller.pwd=password",
"curl http://<KIE_SERVER_USER>:<PASSWORD>@localhost:8080/kie-server/services/rest/server/"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/sso-kie-server-con_integrate-sso |
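As a rough sketch of the token-based approach, the following obtains a token from the RH-SSO token endpoint of the demo realm and sends it to the KIE Server REST API. The realm name, client secret, and user credentials are placeholders for the values you configured earlier, and the sed expression is only a crude way to pull access_token out of the JSON response (a JSON-aware tool such as jq is preferable if available).

TOKEN=$(curl -s http://localhost:8180/auth/realms/demo/protocol/openid-connect/token \
  -d grant_type=password -d client_id=kie-execution-server \
  -d client_secret=<CLIENT_SECRET> -d username=<KIE_SERVER_USER> -d password=<PASSWORD> \
  | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')
curl -H "Authorization: Bearer $TOKEN" http://localhost:8080/kie-server/services/rest/server/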
Chapter 8. ConsoleYAMLSample [console.openshift.io/v1] | Chapter 8. ConsoleYAMLSample [console.openshift.io/v1] Description ConsoleYAMLSample is an extension for customizing OpenShift web console YAML samples. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required metadata spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleYAMLSampleSpec is the desired YAML sample configuration. Samples will appear with their descriptions in a samples sidebar when creating a resources in the web console. 8.1.1. .spec Description ConsoleYAMLSampleSpec is the desired YAML sample configuration. Samples will appear with their descriptions in a samples sidebar when creating a resources in the web console. Type object Required description targetResource title yaml Property Type Description description string description of the YAML sample. snippet boolean snippet indicates that the YAML sample is not the full YAML resource definition, but a fragment that can be inserted into the existing YAML document at the user's cursor. targetResource object targetResource contains apiVersion and kind of the resource YAML sample is representating. title string title of the YAML sample. yaml string yaml is the YAML sample to display. 8.1.2. .spec.targetResource Description targetResource contains apiVersion and kind of the resource YAML sample is representating. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 8.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleyamlsamples DELETE : delete collection of ConsoleYAMLSample GET : list objects of kind ConsoleYAMLSample POST : create a ConsoleYAMLSample /apis/console.openshift.io/v1/consoleyamlsamples/{name} DELETE : delete a ConsoleYAMLSample GET : read the specified ConsoleYAMLSample PATCH : partially update the specified ConsoleYAMLSample PUT : replace the specified ConsoleYAMLSample 8.2.1. /apis/console.openshift.io/v1/consoleyamlsamples Table 8.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
HTTP method DELETE Description delete collection of ConsoleYAMLSample Table 8.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleYAMLSample Table 8.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleYAMLSampleList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleYAMLSample Table 8.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.7. Body parameters Parameter Type Description body ConsoleYAMLSample schema Table 8.8. 
HTTP responses HTTP code Reponse body 200 - OK ConsoleYAMLSample schema 201 - Created ConsoleYAMLSample schema 202 - Accepted ConsoleYAMLSample schema 401 - Unauthorized Empty 8.2.2. /apis/console.openshift.io/v1/consoleyamlsamples/{name} Table 8.9. Global path parameters Parameter Type Description name string name of the ConsoleYAMLSample Table 8.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ConsoleYAMLSample Table 8.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.12. Body parameters Parameter Type Description body DeleteOptions schema Table 8.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleYAMLSample Table 8.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.15. HTTP responses HTTP code Reponse body 200 - OK ConsoleYAMLSample schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleYAMLSample Table 8.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.17. Body parameters Parameter Type Description body Patch schema Table 8.18. HTTP responses HTTP code Reponse body 200 - OK ConsoleYAMLSample schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleYAMLSample Table 8.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.20. Body parameters Parameter Type Description body ConsoleYAMLSample schema Table 8.21. HTTP responses HTTP code Reponse body 200 - OK ConsoleYAMLSample schema 201 - Created ConsoleYAMLSample schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/console_apis/consoleyamlsample-console-openshift-io-v1 |
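The reference above lists only the ConsoleYAMLSample schema and its API endpoints, without a usage example. The following sketch is not part of the original reference; it shows how the required spec fields from section 8.1.1 (title, description, targetResource, yaml) might be populated and submitted through the official kubernetes Python client. The resource name, the sample title, and the embedded YAML text are illustrative assumptions.

# Sketch only: the sample name, title, description, and embedded YAML
# below are illustrative assumptions, not values from the reference above.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig with sufficient privileges

sample = {
    "apiVersion": "console.openshift.io/v1",
    "kind": "ConsoleYAMLSample",
    "metadata": {"name": "example-deployment-sample"},
    "spec": {
        "title": "Example Deployment",
        "description": "A minimal Deployment shown in the web console samples sidebar.",
        "targetResource": {"apiVersion": "apps/v1", "kind": "Deployment"},
        "snippet": False,
        "yaml": (
            "apiVersion: apps/v1\n"
            "kind: Deployment\n"
            "metadata:\n"
            "  name: example\n"
            "spec:\n"
            "  replicas: 1\n"
        ),
    },
}

# ConsoleYAMLSample is cluster scoped, so the cluster-level call is used
# (POST /apis/console.openshift.io/v1/consoleyamlsamples, as listed above).
client.CustomObjectsApi().create_cluster_custom_object(
    group="console.openshift.io",
    version="v1",
    plural="consoleyamlsamples",
    body=sample,
)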
19.8. LVM Cache for Red Hat Gluster Storage | 19.8. LVM Cache for Red Hat Gluster Storage Important LVM Cache must be used with Red Hat Gluster Storage only on Red Hat Enterprise Linux 7.4 or later. This release includes a number of fixes and enhancements that are critical to a positive experience with caching. 19.8.1. About LVM Cache An LVM Cache logical volume (LV) can be used to improve the performance of a block device by attaching to it a smaller and much faster device to act as a data acceleration layer. When a cache is attached to an LV, the Linux kernel subsystems attempt to keep 'hot' data copies in the fast cache layer at the block level. Additionally, as space in the cache allows, writes are made initially to the cache layer. The results can be better Input/Output (I/O) performance improvements for many workloads. 19.8.1.1. LVM Cache vs. DM-Cache dm-cache refers to the Linux kernel-level device-mapper subsystem that is responsible for all I/O transactions. For most usual operations, the administrator interfaces with the logical volume manager (LVM) as a much simpler abstraction layer above device-mapper. As such, lvmcache is simply part of the LVM system acting as an abstraction layer for the dm-cache subsystem. 19.8.1.2. LVM Cache vs. Gluster Tiered Volumes Red Hat Gluster Storage supports tiered volumes, which are often configured with the same type of fast devices backing the fast tier bricks. The operation of tiering is at the file level and is distributed across the trusted storage pool (TSP). These tiers operate by moving files between the tiers based on tunable algorithms, such that files are migrated between tiers rather than copied. In contrast, LVM Cache operates locally at each block device backing the bricks and does so at the block level. LVM Cache stores copies of the hot data in the fast layer using a non-tunable algorithm (though chunk sizes may be tuned for optimal performance). For most workloads, LVM Cache tends to offer greater performance compared to tiering. However, for certain types of workloads where a large number of clients are consistently accessing the same hot file data set, or where writes can consistently go to the hot tier, tiering may prove more beneficial than LVM Cache. 19.8.1.3. Arbiter Bricks Arbiter bricks operate by storing all file metadata transactions but not data transactions in order to prevent split-brain problems without the overhead of a third data copy. It is important to understand that file metadata is stored with the file, and so arbiter bricks effectively store empty copies of all files. In a distributed system such as Red Hat Gluster Storage, latency can greatly affect the performance of file operations, especially when files are very small and file-based transactions are very high. With such small files, the overhead of the metadata latency can be more impactful to performance than the throughput of the I/O subsystems. Therefore, it is important when creating arbiter bricks that the backing storage devices be as fast as the fastest data storage devices. Therefore, when using LVM Cache to accelerate your data volumes with fast devices, you must allocate the same class of fast devices to serve as your arbiter brick backing devices, otherwise your slow arbiter bricks could negate the performance benefits of your cache-accelerated data bricks. 19.8.1.4. Writethrough vs. Writeback LVM Cache can operate in either writethrough or writeback mode, with writethrough being the default. 
In writethrough mode, any data written is stored both in the cache layer and in the main data layer. The loss of a device associated with the cache layer in this case would not mean the loss of any data. Writeback mode delays the writing of data blocks from the cache layer to the main data layer. This mode can increase write performance, but the loss of a device associated with the cache layer can result in lost data locally. Note Data resiliency protects from global data loss in the case of a writeback cache device failure under most circumstances, but edge cases could lead to inconsistent data that cannot be automatically healed. 19.8.1.5. Cache-Friendly Workloads While LVM Cache has been demonstrated to improve performance for Red Hat Gluster Storage under many use cases, the relative effects vary based on the workload. The benefits of block-based caching means that LVM Cache can be efficient for even larger file workloads. However, some workloads may see little-to-no benefit from LVM Cache, and highly-random workloads or those with very large working sets may even experience a performance degradation. It is highly recommended that you understand your workload and test accordingly before making a significant investment in hardware to accelerate your storage workload. 19.8.2. Choosing the Size and Speed of Your Cache Devices Sizing a cache appropriately to a workload can be a complicated study, particularly in Red Hat Gluster Storage where the cache is local to the bricks rather than global to the volume. In general, you want to understand the size of your working set as a percentage of your total data set and then size your cache layer with some headroom (10-20%) beyond that working set size to allow for efficient flushes and room to cache new writes. Optimally, the entire working set is kept in the cache, and the overall performance you experience is near that of storing your data directly on the fast devices. When heavily stressed by a working set that is not well-suited for the cache size, you will begin to see a higher percentage of cache misses and your performance will be inconsistent. You may find that as this cache-to-data imbalance increases, a higher percentage of data operations will drop to the speed of the slower data device. From the perspective of a user, this can sometimes be more frustrating than a device that is consistently slow. Understanding and testing your own workload is essential to making an appropriate cache sizing decision. When choosing your cache devices, always consider high-endurance enterprise-class drives. These are typically tuned to either read or write intensive workloads, so be sure to inspect the hardware performance details when making your selection. Pay close attention to latency alongside IOPS or throughput, as the high transaction activity of a cache will benefit significantly from lower-latency hardware. When possible, select NVMe devices that use the PCI bus directly rather than SATA/SAS devices, as this will additionally benefit latency. 19.8.3. Configuring LVM Cache A cache pool is created using logical volume manager (LVM) with fast devices as the physical volumes (PVs). The cache pool is then attached to an existing thin pool (TP) or thick logical volume (LV). Once this is done, block-level caching is immediately enabled for the configured LV, and the dm-cache algorithms will work to keep hot copies of data on the cache pool sub-volume. 
Warning Adding or removing cache pools can be done on active volumes, even with mounted filesystems in use. However, there is overhead to the operation and performance impacts will be seen, especially when removing a cache volume in writeback mode, as a full data sync will need to occur. As with any changes to the I/O stack, there is risk of data loss. All changes must be made with the requisite caution. In the following example commands, we assume the use of a high-performance NVMe PCI device for caching. These devices typically present with device file paths such as /dev/nvme0n1 . A SATA/SAS device will likely present with a device path such as /dev/sdb . The following example naming has been used: Physical Volume (PV) Name: /dev/nvme0n1 Volume Group (VG) Name: GVG Thin pool name: GTP Logical Volume (LV) name: GLV Note There are several different ways to configure LVM Cache. Following is the most simple approach applicable to most use cases. For details and further command examples, see lvmcache(7) . Create a PV for your fast data device. Add the fast data PV to the VG that hosts the LV you intend to cache. Create the cache pool from your fast data device, reserving space required for metadata during the cache conversion process of your LV. Convert your existing data thin pool LV into a cache LV. 19.8.4. Managing LVM Cache 19.8.4.1. Changing the Mode of an Existing Cache Pool An existing cache LV can be converted between writethrough and writeback modes with the lvchange command. For thin LVs, the command must be run against the tdata subvolume. 19.8.4.2. Checking Your Configuration Use the lsblk command to view the new virtual block device layout. The lvs command displays a number of valuable columns to show the status of your cache pool and volume. For more details, see lvs(8) . Some of the useful columns from the lvs command that can be used to monitor the effectiveness of the cache and to aid in sizing decisions are: CacheTotalBlocks CacheUsedBlocks CacheDirtyBlocks CacheReadHits CacheReadMisses CacheWriteHits CacheWriteMisses You will see a high ratio of Misses to Hits when the cache is cold (freshly attached to the LV). However, with a warm cache (volume online and transacting data for a sufficiently long period of time), high ratios here are indicative of an undersized cache device. 19.8.4.3. Detaching a Cache Pool You can split a cache pool from an LV in one command, leaving the data LV in an un-cached state with all data intact and the cache pool still existing but unattached. In writeback mode this can take a long time to complete while all data is synced. This may also negatively impact performance while it is running. | [
"pvcreate /dev/nvme0n1",
"vgextend GVG /dev/nvme0n1",
"lvcreate --type cache-pool -l 100%FREE -n cpool GVG /dev/nvme0n1",
"lvconvert --type cache --cachepool GVG/cpool GVG/GTP",
"lvchange --cachemode writeback GVG/GTP_tdata",
"lsblk /dev/{sdb,nvme0n1} NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sdb 8:16 0 9.1T 0 disk └─GVG-GTP_tdata_corig 253:9 0 9.1T 0 lvm └─GVG-GTP_tdata 253:3 0 9.1T 0 lvm └─GVG-GTP-tpool 253:4 0 9.1T 0 lvm ├─GVG-GTP 253:5 0 9.1T 0 lvm └─GVG-GLV 253:6 0 9.1T 0 lvm /mnt nvme0n1 259:0 0 745.2G 0 disk ├─GVG-GTP_tmeta 253:2 0 76M 0 lvm │ └─GVG-GTP-tpool 253:4 0 9.1T 0 lvm │ ├─GVG-GTP 253:5 0 9.1T 0 lvm │ └─GVG-GLV 253:6 0 9.1T 0 lvm /mnt ├─GVG-cpool_cdata 253:7 0 701.1G 0 lvm │ └─GVG-GTP_tdata 253:3 0 9.1T 0 lvm │ └─GVG-GTP-tpool 253:4 0 9.1T 0 lvm │ ├─GVG-GTP 253:5 0 9.1T 0 lvm │ └─GVG-GLV 253:6 0 9.1T 0 lvm /mnt ├─GVG-cpool_cmeta 253:8 0 48M 0 lvm │ └─GVG-GTP_tdata 253:3 0 9.1T 0 lvm │ └─GVG-GTP-tpool 253:4 0 9.1T 0 lvm │ ├─GVG-GTP 253:5 0 9.1T 0 lvm │ └─GVG-GLV 253:6 0 9.1T 0 lvm /mnt └─GVG-GTP_tdata_corig 253:9 0 9.1T 0 lvm └─GVG-GTP_tdata 253:3 0 9.1T 0 lvm └─GVG-GTP-tpool 253:4 0 9.1T 0 lvm ├─GVG-GTP 253:5 0 9.1T 0 lvm └─GVG-GLV 253:6 0 9.1T 0 lvm /mnt",
"lvs -a -o name,vg_name,size,pool_lv,devices,cachemode,chunksize LV VG LSize Pool Devices CacheMode Chunk GLV GVG 9.10t GTP 0 GTP GVG <9.12t GTP_tdata(0) 8.00m [GTP_tdata] GVG <9.12t [cpool] GTP_tdata_corig(0) writethrough 736.00k [GTP_tdata_corig] GVG <9.12t /dev/sdb(0) 0 [GTP_tdata_corig] GVG <9.12t /dev/nvme0n1(185076) 0 [GTP_tmeta] GVG 76.00m /dev/nvme0n1(185057) 0 [cpool] GVG <701.10g cpool_cdata(0) writethrough 736.00k [cpool_cdata] GVG <701.10g /dev/nvme0n1(24) 0 [cpool_cmeta] GVG 48.00m /dev/nvme0n1(12) 0 [lvol0_pmspare] GVG 76.00m /dev/nvme0n1(0) 0 [lvol0_pmspare] GVG 76.00m /dev/nvme0n1(185050) 0 root vg_root 50.00g /dev/sda3(4095) 0 swap vg_root <16.00g /dev/sda3(0) 0",
"lvs -a -o devices,cachetotalblocks,cacheusedblocks, cachereadhits,cachereadmisses | egrep 'Devices|cdata' Devices CacheTotalBlocks CacheUsedBlocks CacheReadHits CacheReadMisses cpool_cdata(0) 998850 2581 1 192",
"lvconvert --splitcache GVG/cpool"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-lvm_cache |
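As a rough companion to the sizing guidance in Section 19.8.2 above (working set plus 10-20% headroom), the following Python sketch is not part of the original chapter; the workload figures in it are assumptions chosen only to illustrate the arithmetic, not recommendations.

# Back-of-the-envelope cache sizing per the guidance in Section 19.8.2.
# All input figures are illustrative assumptions.
def suggested_cache_gib(total_data_gib, working_set_pct, headroom_pct=20):
    """Return the working set size plus headroom, in GiB."""
    working_set_gib = total_data_gib * working_set_pct / 100.0
    return working_set_gib * (1 + headroom_pct / 100.0)

# Example: a 9 TiB brick where roughly 7% of the data is 'hot',
# sized with 20% headroom for flushes and new writes.
print("Suggested cache size: %.0f GiB" % suggested_cache_gib(9 * 1024, 7))
# -> roughly 774 GiB for this assumed workload.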
10.8.2. The Secure Web Server Virtual Host | 10.8.2. The Secure Web Server Virtual Host By default, the Apache HTTP Server is configured as both a non-secure and a secure server. Both the non-secure and secure servers use the same IP address and hostname, but listen on different ports: 80 and 443 respectively. This enables both non-secure and secure communications to take place simultaneously. One aspect of SSL enhanced HTTP transmissions is that they are more resource intensive than the standard HTTP protocol, so a secure server cannot serve as many pages per second. For this reason, it is often a good idea to minimize the information available from the secure server, especially on a high traffic website. Important Do not use name-based virtual hosts in conjunction with a secure Web server as the SSL handshake occurs before the HTTP request identifies the appropriate name-based virtual host. Name-based virtual hosts only work with the non-secure Web server. The configuration directives for the secure server are contained within virtual host tags in the /etc/httpd/conf.d/ssl.conf file. By default, both the secure and the non-secure Web servers share the same DocumentRoot . It is recommended that a different DocumentRoot be made available for the secure Web server. To stop the non-secure Web server from accepting connections, comment out the line in httpd.conf which reads Listen 80 by placing a hash mark ( # ) at the beginning of the line. When finished, the line looks like the following example: For more information on configuring an SSL enhanced Web server, refer to the chapter titled Apache HTTP Secure Server Configuration in the System Administrators Guide . For advanced configuration tips, refer to the Apache Software Foundation documentation available online at the following URLs: http://httpd.apache.org/docs-2.0/ssl/ http://httpd.apache.org/docs-2.0/vhosts/ | [
"#Listen 80"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-defaultvirtuals |
8.6. Additional Resources | 8.6. Additional Resources For more information on how to manage software packages on Red Hat Enterprise Linux, see the resources listed below. Installed Documentation yum (8) - The manual page for the yum command-line utility provides a complete list of supported options and commands. yumdb (8) - The manual page for the yumdb command-line utility documents how to use this tool to query and, if necessary, alter the yum database. yum.conf (5) - The manual page named yum.conf documents available yum configuration options. yum-utils (1) - The manual page named yum-utils lists and briefly describes additional utilities for managing yum configuration, manipulating repositories, and working with yum database. Online Resources Yum Guides - The Yum Guides page on the project home page provides links to further documentation. Red Hat Access Labs - The Red Hat Access Labs includes a " Yum Repository Configuration Helper " . See Also Chapter 4, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands. Appendix B, RPM describes the RPM Package Manager ( RPM ), the packaging system used by Red Hat Enterprise Linux. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-yum-additional_resources |
Customizing persistent storage | Customizing persistent storage Red Hat OpenStack Services on OpenShift 18.0 Customizing storage services for Red Hat OpenStack Services on OpenShift OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/customizing_persistent_storage/index |
Chapter 8. Understanding OpenShift Container Platform development | Chapter 8. Understanding OpenShift Container Platform development To fully leverage the capability of containers when developing and running enterprise-quality applications, ensure your environment is supported by tools that allow containers to be: Created as discrete microservices that can be connected to other containerized, and non-containerized, services. For example, you might want to join your application with a database or attach a monitoring application to it. Resilient, so if a server crashes or needs to go down for maintenance or to be decommissioned, containers can start on another machine. Automated to pick up code changes automatically and then start and deploy new versions of themselves. Scaled up, or replicated, to have more instances serving clients as demand increases and then spun down to fewer instances as demand declines. Run in different ways, depending on the type of application. For example, one application might run once a month to produce a report and then exit. Another application might need to run constantly and be highly available to clients. Managed so you can watch the state of your application and react when something goes wrong. Containers' widespread acceptance, and the resulting requirements for tools and methods to make them enterprise-ready, resulted in many options for them. The rest of this section explains options for assets you can create when you build and deploy containerized Kubernetes applications in OpenShift Container Platform. It also describes which approaches you might use for different kinds of applications and development requirements. 8.1. About developing containerized applications You can approach application development with containers in many ways, and different approaches might be more appropriate for different situations. To illustrate some of this variety, the series of approaches that is presented starts with developing a single container and ultimately deploys that container as a mission-critical application for a large enterprise. These approaches show different tools, formats, and methods that you can employ with containerized application development. This topic describes: Building a simple container and storing it in a registry Creating a Kubernetes manifest and saving it to a Git repository Making an Operator to share your application with others 8.2. Building a simple container You have an idea for an application and you want to containerize it. First you require a tool for building a container, like buildah or docker, and a file that describes what goes in your container, which is typically a Dockerfile . Next, you require a location to push the resulting container image so you can pull it to run anywhere you want it to run. This location is a container registry. Some examples of each of these components are installed by default on most Linux operating systems, except for the Dockerfile, which you provide yourself. The following diagram displays the process of building and pushing an image: Figure 8.1. Create a simple containerized application and push it to a registry If you use a computer that runs Red Hat Enterprise Linux (RHEL) as the operating system, the process of creating a containerized application requires the following steps: Install container build tools: RHEL contains a set of tools that includes podman, buildah, and skopeo that you use to build and manage containers.
Create a Dockerfile to combine base image and software: Information about building your container goes into a file that is named Dockerfile . In that file, you identify the base image you build from, the software packages you install, and the software you copy into the container. You also identify parameter values like network ports that you expose outside the container and volumes that you mount inside the container. Put your Dockerfile and the software you want to containerize in a directory on your RHEL system. Run buildah or docker build: Run the buildah build-using-dockerfile or the docker build command to pull your chosen base image to the local system and create a container image that is stored locally. You can also build container images without a Dockerfile by using buildah. Tag and push to a registry: Add a tag to your new container image that identifies the location of the registry in which you want to store and share your container. Then push that image to the registry by running the podman push or docker push command. Pull and run the image: From any system that has a container client tool, such as podman or docker, run a command that identifies your new image. For example, run the podman run <image_name> or docker run <image_name> command. Here <image_name> is the name of your new container image, which resembles quay.io/myrepo/myapp:latest . The registry might require credentials to push and pull images. For more details on the process of building container images, pushing them to registries, and running them, see Custom image builds with Buildah . 8.2.1. Container build tool options Building and managing containers with buildah, podman, and skopeo results in industry standard container images that include features specifically tuned for deploying containers in OpenShift Container Platform or other Kubernetes environments. These tools are daemonless and can run without root privileges, requiring less overhead to run them. Important Support for Docker Container Engine as a container runtime is deprecated in Kubernetes 1.20 and will be removed in a future release. However, Docker-produced images will continue to work in your cluster with all runtimes, including CRI-O. For more information, see the Kubernetes blog announcement . When you ultimately run your containers in OpenShift Container Platform, you use the CRI-O container engine. CRI-O runs on every worker and control plane machine in an OpenShift Container Platform cluster, but CRI-O is not yet supported as a standalone runtime outside of OpenShift Container Platform. 8.2.2. Base image options The base image you choose to build your application on contains a set of software that resembles a Linux system to your application. When you build your own image, your software is placed into that file system and sees that file system as though it were looking at its operating system. Choosing this base image has major impact on how secure, efficient and upgradeable your container is in the future. Red Hat provides a new set of base images referred to as Red Hat Universal Base Images (UBI). These images are based on Red Hat Enterprise Linux and are similar to base images that Red Hat has offered in the past, with one major difference: they are freely redistributable without a Red Hat subscription. As a result, you can build your application on UBI images without having to worry about how they are shared or the need to create different images for different environments. These UBI images have standard, init, and minimal versions. 
You can also use the Red Hat Software Collections images as a foundation for applications that rely on specific runtime environments such as Node.js, Perl, or Python. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is ready to run that code. S2I images are available for you to use directly from the OpenShift Container Platform web UI by selecting Catalog Developer Catalog , as shown in the following figure: Figure 8.2. Choose S2I base images for apps that need specific runtimes 8.2.3. Registry options Container registries are where you store container images so you can share them with others and make them available to the platform where they ultimately run. You can select large, public container registries that offer free accounts or a premium version that offer more storage and special features. You can also install your own registry that can be exclusive to your organization or selectively shared with others. To get Red Hat images and certified partner images, you can draw from the Red Hat Registry. The Red Hat Registry is represented by two locations: registry.access.redhat.com , which is unauthenticated and deprecated, and registry.redhat.io , which requires authentication. You can learn about the Red Hat and partner images in the Red Hat Registry from the Container images section of the Red Hat Ecosystem Catalog . Besides listing Red Hat container images, it also shows extensive information about the contents and quality of those images, including health scores that are based on applied security updates. Large, public registries include Docker Hub and Quay.io . The Quay.io registry is owned and managed by Red Hat. Many of the components used in OpenShift Container Platform are stored in Quay.io, including container images and the Operators that are used to deploy OpenShift Container Platform itself. Quay.io also offers the means of storing other types of content, including Helm charts. If you want your own, private container registry, OpenShift Container Platform itself includes a private container registry that is installed with OpenShift Container Platform and runs on its cluster. Red Hat also offers a private version of the Quay.io registry called Red Hat Quay . Red Hat Quay includes geo replication, Git build triggers, Clair image scanning, and many other features. All of the registries mentioned here can require credentials to download images from those registries. Some of those credentials are presented on a cluster-wide basis from OpenShift Container Platform, while other credentials can be assigned to individuals. 8.3. Creating a Kubernetes manifest for OpenShift Container Platform While the container image is the basic building block for a containerized application, more information is required to manage and deploy that application in a Kubernetes environment such as OpenShift Container Platform. The typical steps after you create an image are to: Understand the different resources you work with in Kubernetes manifests Make some decisions about what kind of an application you are running Gather supporting components Create a manifest and store that manifest in a Git repository so you can store it in a source versioning system, audit it, track it, promote and deploy it to the environment, roll it back to earlier versions, if necessary, and share it with others 8.3.1. 
About Kubernetes pods and services While the container image is the basic unit with docker, the basic units that Kubernetes works with are called pods . Pods represent the next step in building out an application. A pod can contain one or more than one container. The key is that the pod is the single unit that you deploy, scale, and manage. Scalability and namespaces are probably the main items to consider when determining what goes in a pod. For ease of deployment, you might want to deploy a container in a pod and include its own logging and monitoring container in the pod. Later, when you run the pod and need to scale up an additional instance, those other containers are scaled up with it. For namespaces, containers in a pod share the same network interfaces, shared storage volumes, and resource limitations, such as memory and CPU, which makes it easier to manage the contents of the pod as a single unit. Containers in a pod can also communicate with each other by using standard inter-process communications, such as System V semaphores or POSIX shared memory. While individual pods represent a scalable unit in Kubernetes, a service provides a means of grouping together a set of pods to create a complete, stable application that can complete tasks such as load balancing. A service is also more permanent than a pod because the service remains available from the same IP address until you delete it. When the service is in use, it is requested by name and the OpenShift Container Platform cluster resolves that name into the IP addresses and ports where you can reach the pods that compose the service. By their nature, containerized applications are separated from the operating systems where they run and, by extension, their users. Part of your Kubernetes manifest describes how to expose the application to internal and external networks by defining network policies that allow fine-grained control over communication with your containerized applications. To connect incoming requests for HTTP, HTTPS, and other services from outside your cluster to services inside your cluster, you can use an Ingress resource. If your container requires on-disk storage instead of database storage, which might be provided through a service, you can add volumes to your manifests to make that storage available to your pods. You can configure the manifests to create persistent volumes (PVs) or dynamically create volumes that are added to your Pod definitions. After you define a group of pods that compose your application, you can define those pods in Deployment and DeploymentConfig objects. 8.3.2. Application types Next, consider how your application type influences how to run it. Kubernetes defines different types of workloads that are appropriate for different kinds of applications. To determine the appropriate workload for your application, consider if the application is: Meant to run to completion and be done. An example is an application that starts up to produce a report and exits when the report is complete. The application might not run again then for a month. Suitable OpenShift Container Platform objects for these types of applications include Job and CronJob objects. Expected to run continuously. For long-running applications, you can write a deployment . Required to be highly available. If your application requires high availability, then you want to size your deployment to have more than one instance. A Deployment or DeploymentConfig object can incorporate a replica set for that type of application.
With replica sets, pods run across multiple nodes to make sure the application is always available, even if a worker goes down. Need to run on every node. Some types of Kubernetes applications are intended to run in the cluster itself on every master or worker node. DNS and monitoring applications are examples of applications that need to run continuously on every node. You can run this type of application as a daemon set . You can also run a daemon set on a subset of nodes, based on node labels. Require life-cycle management. When you want to hand off your application so that others can use it, consider creating an Operator . Operators let you build in intelligence, so they can handle things like backups and upgrades automatically. Coupled with the Operator Lifecycle Manager (OLM), cluster managers can expose Operators to selected namespaces so that users in the cluster can run them. Have identity or numbering requirements. An application might have identity requirements or numbering requirements. For example, you might be required to run exactly three instances of the application and to name the instances 0 , 1 , and 2 . A stateful set is suitable for this application. Stateful sets are most useful for applications that require independent storage, such as databases and zookeeper clusters. 8.3.3. Available supporting components The application you write might need supporting components, like a database or a logging component. To fulfill that need, you might be able to obtain the required component from the following Catalogs that are available in the OpenShift Container Platform web console: OperatorHub, which is available in each OpenShift Container Platform 4.14 cluster. The OperatorHub makes Operators available from Red Hat, certified Red Hat partners, and community members to the cluster operator. The cluster operator can make those Operators available in all or selected namespaces in the cluster, so developers can launch them and configure them with their applications. Templates, which are useful for a one-off type of application, where the lifecycle of a component is not important after it is installed. A template provides an easy way to get started developing a Kubernetes application with minimal overhead. A template can be a list of resource definitions, which could be Deployment , Service , Route , or other objects. If you want to change names or resources, you can set these values as parameters in the template. You can configure the supporting Operators and templates to the specific needs of your development team and then make them available in the namespaces in which your developers work. Many people add shared templates to the openshift namespace because it is accessible from all other namespaces. 8.3.4. Applying the manifest Kubernetes manifests let you create a more complete picture of the components that make up your Kubernetes applications. You write these manifests as YAML files and deploy them by applying them to the cluster, for example, by running the oc apply command. 8.3.5. Next steps At this point, consider ways to automate your container development process. Ideally, you have some sort of CI pipeline that builds the images and pushes them to a registry. In particular, a GitOps pipeline integrates your container development with the Git repositories that you use to store the software that is required to build your applications. The workflow to this point might look like: Day 1: You write some YAML.
You then run the oc apply command to apply that YAML to the cluster and test that it works. Day 2: You put your YAML container configuration file into your own Git repository. From there, people who want to install that app, or help you improve it, can pull down the YAML and apply it to their cluster to run the app. Day 3: Consider writing an Operator for your application. 8.4. Develop for Operators Packaging and deploying your application as an Operator might be preferred if you make your application available for others to run. As noted earlier, Operators add a lifecycle component to your application that acknowledges that the job of running an application is not complete as soon as it is installed. When you create an application as an Operator, you can build in your own knowledge of how to run and maintain the application. You can build in features for upgrading the application, backing it up, scaling it, or keeping track of its state. If you configure the application correctly, maintenance tasks, like updating the Operator, can happen automatically and invisibly to the Operator's users. An example of a useful Operator is one that is set up to automatically back up data at particular times. Having an Operator manage an application's backup at set times can save a system administrator from remembering to do it. Any application maintenance that has traditionally been completed manually, like backing up data or rotating certificates, can be completed automatically with an Operator. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/architecture/understanding-development |
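The chapter above describes manifests as YAML files that you apply with the oc apply command. As a purely illustrative sketch, not an excerpt from the chapter, and with an assumed image, namespace, labels, and port, the same small Deployment could also be built and submitted with the official kubernetes Python client:

# Sketch: programmatic equivalent of applying a small Deployment manifest.
# The image, namespace, labels, and port are assumptions for illustration.
from kubernetes import client, config

config.load_kube_config()

labels = {"app": "myapp"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="myapp", labels=labels),
    spec=client.V1DeploymentSpec(
        # More than one replica so the application stays available if a node goes down.
        replicas=2,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="myapp",
                        image="quay.io/myrepo/myapp:latest",
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="my-project", body=deployment)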
2. Related Documentation | 2. Related Documentation For more information about using Red Hat Enterprise Linux, see the following resources: Installation Guide - Documents relevant information regarding the installation of Red Hat Enterprise Linux 6. Deployment Guide - Documents relevant information regarding the deployment, configuration and administration of Red Hat Enterprise Linux 6. Storage Administration Guide - Provides instructions on how to effectively manage storage devices and file systems on Red Hat Enterprise Linux 6. For more information about the High Availability Add-On and the Resilient Storage Add-On for Red Hat Enterprise Linux 6, see the following resources: High Availability Add-On Overview - Provides a high-level overview of the Red Hat High Availability Add-On. Cluster Administration - Provides information about installing, configuring and managing the High Availability Add-On. Logical Volume Manager Administration - Provides a description of the Logical Volume Manager (LVM), including information on running LVM in a clustered environment. DM Multipath - Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux. Load Balancer Administration - Provides information on configuring high-performance systems and services with the Load Balancer Add-On, a set of integrated software components that provide Linux Virtual Servers (LVS) for balancing IP load across a set of real servers. Release Notes - Provides information about the current release of Red Hat products. Red Hat documents are available in HTML, PDF, and RPM versions on the Red Hat Enterprise Linux Documentation CD and online at https://access.redhat.com/site/documentation/ . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/related_documentation-gfs2 |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/providing-feedback-on-red-hat-documentation_managing-hosts |
Chapter 2. Using the Software Development Kit | Chapter 2. Using the Software Development Kit This section describes how to use the software development kit for Version 4. 2.1. Packages The following modules are most frequently used by the Python SDK: ovirtsdk4 This is the top level module. Its most important element is the Connection class, which is the mechanism to connect to the server and to obtain the reference to the root of the services tree. The Error class is the base exception class that the SDK will raise when it needs to report an error. For certain kinds of errors, there are specific error classes, which extend the base error class: AuthError - Raised when authentication or authorization fails. ConnectionError - Raised when the name of the server cannot be resolved or the server is unreachable. NotFoundError - Raised when the requested object does not exist. TimeoutError - Raised when an operation times out. ovirtsdk4.types This module contains the classes that implement the types used in the API. For example, the ovirtsdk4.types.Vm class is the implementation of the virtual machine type. These classes are data containers and do not contain any logic. Instances of these classes are used as parameters and return values of service methods. The conversion to or from the underlying representation is handled transparently by the SDK. ovirtsdk4.services This module contains the classes that implement the services supported by the API. For example, the ovirtsdk4.services.VmsService class is the implementation of the service that manages the collection of virtual machines of the system. Instances of these classes are automatically created by the SDK when a service is located. For example, a new instance of the VmsService class is automatically created by the SDK when doing the following: vms_service = connection.system_service().vms_service() It is best to avoid creating instances of these classes manually, as the parameters of the constructors and, in general, all the methods except the service locators and service methods, may change in the future. There are other modules, like ovirtsdk4.http , ovirtsdk4.readers , and ovirtsdk4.writers . These are used to implement the HTTP communication and for XML parsing and rendering. Avoid using them, because they are internal implementation details that may change in the future; backwards compatibility is not guaranteed. 2.2. Connecting to the Server To connect to the server, import the ovirtsdk4 module, which contains the Connection class. This is the entry point of the SDK, and provides access to the root of the tree of services of the API: import ovirtsdk4 as sdk connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', ) The connection holds critical resources, including a pool of HTTP connections to the server and an authentication token. It is very important to free these resources when they are no longer in use: connection.close() Once a connection is closed, it cannot be reused. The ca.pem file is required when connecting to a server protected with TLS. In a normal installation, it is located in /etc/pki/ovirt-engine/ on the Manager machine. If you do not specify the ca_file , the system-wide CA certificate store will be used. For more information on obtaining the ca.pem file, see the REST API Guide . If the connection is not successful, the SDK will raise an ovirtsdk4.Error exception containing the details. 2.3.
Using Types The classes in the ovirtsdk4.types module are pure data containers. They do not have any logic or operations. Instances of types can be created and modified at will. Creating or modifying an instance does not affect the server side, unless the change is explicitly passed with a call to one of the service methods described below. Changes on the server side are not automatically reflected in the instances that already exist in memory. The constructors of these classes have multiple optional arguments, one for each attribute of the type. This is intended to simplify creation of objects using nested calls to multiple constructors. This example creates an instance of a virtual machine, specifying its cluster name, template, and memory, in bytes: from ovirtsdk4 import types vm = types.Vm( name='vm1', cluster=types.Cluster( name='Default' ), template=types.Template( name='mytemplate' ), memory=1073741824 ) Using the constructors in this way is recommended, but not mandatory. You can also create the instance with no arguments in the call to the constructor and populate the object step by step, using the setters, or by using a mix of both approaches: vm = types.Vm() vm.name = 'vm1' vm.cluster = types.Cluster(name='Default') vm.template = types.Template(name='mytemplate') vm.memory=1073741824 Attributes that are defined as lists of objects in the specification of the API are implemented as Python lists. For example, the custom_properties attributes of the Vm type are defined as a list of objects of type CustomProperty . When the attributes are used in the SDK, they are a Python list: vm = types.Vm( name='vm1', custom_properties=[ types.CustomProperty(...), types.CustomProperty(...), ... ] ) Attributes that are defined as enumerated values in the API are implemented as enum in Python, using the native support for enums in Python 3 and the enum34 package in Python 2.7. In this example, the status attribute of the Vm type is defined using the VmStatus enum : if vm.status == types.VmStatus.DOWN: ... elif vm.status == types.VmStatus.IMAGE_LOCKED: .... Note In the API specification, the values of enum types appear in lower case, because that is what is used for XML and JSON. The Python convention, however, is to capitalize enum values. Reading the attributes of instances of types is done using the corresponding properties: print("vm.name: %s" % vm.name) print("vm.memory: %s" % vm.memory) for custom_property in vm.custom_properties: ... 2.4. Using Links Some attributes of types are defined by the API as links. This convention indicates that the values are not normally populated when retrieving the representation of that object. Rather, a link is returned instead. For example, when retrieving a virtual machine, the XML response from the server includes the <link> attribute. The link to vm.disk_attachments does not contain the actual disk attachments. To obtain the data, the Connection class provides a follow_link method that uses the value of the href XML attribute to retrieve the actual data. For example, to retrieve the details of the disks of the virtual machine, you follow the link to the disk attachments, and then to each of the disks: # Retrieve the virtual machine: vm = vm_service.get() # Follow the link to the disk attachments, and then to the disks: attachments = connection.follow_link(vm.disk_attachments) for attachment in attachments: disk = connection.follow_link(attachment.disk) print("disk.alias: %s" % disk.alias) 2.5.
Locating Services The API provides a set of services, each associated with a path within the URL space of the server. For example, the service that manages the collection of virtual machines of the system is located in /vms , and the service that manages the virtual machine with identifier 123 is located in /vms/123 . In the SDK, the root of that tree of services is implemented by the system service. It is obtained calling the system_service method of the connection: system_service = connection.system_service() When you have the reference to this system service, you can use it to obtain references to other services, calling the *_service methods, called service locators, of the service. For example, to obtain a reference to the service that manages the collection of virtual machines of the system, you use the vms_service service locator: vms_service = system_service.vms_service() To obtain a reference to the service that manages the virtual machine with identifier 123 , you use the vm_service service locator of the service that manages the collection of virtual machines. It uses the identifier of the virtual machine as a parameter: vm_service = vms_service.vm_service('123') Important Calling service locators does not send a request to the server. The Python objects that they return are pure services, which do not contain any data. For example, the vm_service Python object called in this example is not the representation of a virtual machine. It is the service that is used to retrieve, update, delete, start and stop that virtual machine. 2.6. Using Services After you have located a service, you can call its service methods, which send requests to the server and do the real work. Services that manage a single object usually support the get , update , and remove methods. Services that manage collections of objects usually support the list and add methods. Both kinds of services, especially services that manage a single object, can support additional action methods. 2.6.1. Using get Methods These service methods are used to retrieve the representation of a single object. The following example retrieves the representation of the virtual machine with identifier 123 : # Find the service that manages the virtual machine: vms_service = system_service.vms_service() vm_service = vms_service.vm_service('123') # Retrieve the representation of the virtual machine: vm = vm_service.get() The response is an instance of the corresponding type, in this case an instance of the Python class ovirtsdk4.types.Vm . The get methods of some services support additional parameters that control how to retrieve the representation of the object or what representation to retrieve if there is more than one. For example, you may want to retrieve either the current state of a virtual machine or its state the time it is started, as they may be different. The get method of the service that manages a virtual machine supports a next_run Boolean parameter: # Retrieve the representation of the virtual machine, not the # current one, but the one that will be used after the # boot: vm = vm_service.get(next_run=True) See the reference documentation of the SDK for details. If the object cannot be retrieved for any reason, the SDK raises an ovirtsdk4.Error exception, with details of the failure. This includes the situation when the object does not actually exist. Note that the exception is raised when calling the get service method. 
The call to the service locator method never fails, even if the object does not exist, because that call does not send a request to the server. For example: # Call the service that manages a non-existent virtual machine. # This call will succeed. vm_service = vms_service.vm_service('junk') # Retrieve the virtual machine. This call will raise an exception. vm = vm_service.get() 2.6.2. Using list Methods These service methods retrieve the representations of the objects of a collection. This example retrieves the complete collection of virtual machines of the system: # Find the service that manages the collection of virtual # machines: vms_service = system_service.vms_service() # List the virtual machines in the collection vms = vms_service.list() The result will be a Python list containing the instances of corresponding types. For example, in this case, the result will be a list of instances of the class ovirtsdk4.types.Vm . The list methods of some services support additional parameters. For example, almost all top-level collections support a search parameter to filter the results or a max parameter to limit the number of results returned by the server. This example retrieves the names of virtual machines starting with my , with an upper limit of 10 results: vms = vms_service.list(search='name=my*', max=10) Note Not all list methods support these parameters. Some list methods support other parameters. See the reference documentation of the SDK for details. If a list of returned results is empty for any reason, the returned value will be an empty list. It will never be None . If there is an error while trying to retrieve the result, the SDK will raise an ovirtsdk4.Error exception containing the details of the failure. 2.6.3. Using add Methods These service methods add new elements to a collection. They receive an instance of the relevant type describing the object to add, send the request to add it, and return an instance of the type describing the added object. This example adds a new virtual machine called vm1 : from ovirtsdk4 import types # Add the virtual machine: vm = vms_service.add( vm=types.Vm( name='vm1', cluster=types.Cluster( name='Default' ), template=types.Template( name='mytemplate' ) ) ) If the object cannot be created for any reason, the SDK will raise an ovirtsdk4.Error exception containing the details of the failure. It will never return None . Important The Python object returned by this add method is an instance of the relevant type. It is not a service but a container of data. In this particular example, the returned object is an instance of the ovirtsdk4.types.Vm class. If, after creating the virtual machine, you need to perform an operation such as retrieving or starting it, you will first need to find the service that manages it, and call the corresponding service locator: # Add the virtual machine: vm = vms_service.add( ... ) # Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Start the virtual machine vm_service.start() Objects are created asynchronously. When you create a new virtual machine, the add method will return a response before the virtual machine is completely created and ready to be used. It is good practice to poll the status of the object to ensure that it is completely created. For a virtual machine, you should check until its status is DOWN : # Add the virtual machine: vm = vms_service.add( ... 
) # Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Wait until the virtual machine is down, indicating that it is # completely created: while True: time.sleep(5) vm = vm_service.get() if vm.status == types.VmStatus.DOWN: break Using a loop to retrieve the object status, with the get method, ensures that the status attribute is updated. 2.6.4. Using update Methods These service methods update existing objects. They receive an instance of the relevant type describing the update to perform, send the request to update it, and return an instance of the type describing the updated object. This example updates the name of a virtual machine from vm1 to newvm : from ovirtsdk4 import types # Find the virtual machine, and then the service that # manages it: vm = vms_service.list(search='name=vm1')[0] vm_service = vm_service.vm_service(vm.id) # Update the name: updated_vm = vm_service.update( vm=types.Vm( name='newvm' ) ) When performing updates, avoid sending the complete representation of the object. Send only the attributes that you want to update. Do not do this: # Retrieve the complete representation: vm = vm_service.get() # Update the representation, in memory, without sending a request # to the server: vm.name = 'newvm' # Send the update. Do *not* do this. vms_service.update(vm) Sending the complete representation causes two problems: You are sending much more information than the server needs, thus wasting resources. The server will try to update all the attributes of the object, even those that you did not intend to change. This may cause bugs on the server side. The update methods of some services support additional parameters that control how or what to update. For example, you may want to update either the current state of a virtual machine or the state that will be used the time the virtual machine is started. The update method of the service that manages a virtual machine supports a next_run Boolean parameter: # Update the memory of the virtual machine to 1 GiB, # not during the current run, but after boot: vm = vm_service.update( vm=types.Vm( memory=1073741824 ), next_run=True ) If the update cannot be performed for any reason, the SDK will raise an ovirtsdk4.Error exception containing the details of the failure. It will never return None . The Python object returned by this update method is an instance of the relevant type. It is not a service, but a container for data. In this particular example, the returned object will be an instance of the ovirtsdk4.types.Vm class. 2.6.5. Using remove Methods These service methods remove existing objects. They usually do not take parameters, because they are methods of services that manage single objects. Therefore, the service already knows what object to remove. This example removes the virtual machine with identifier 123 : # Find the virtual machine by name: vm = vms_service.list(search='name=123')[0] # Find the service that manages the virtual machine using the ID: vm_service = vms_service.vm_service(vm.id) # Remove the virtual machine: vm_service.remove() The remove methods of some services support additional parameters that control how or what to remove. For example, it is possible to remove a virtual machine while preserving its disks, using the detach_only Boolean parameter: # Remove the virtual machine while preserving the disks: vm_service.remove(detach_only=True) The remove method returns None if the object is removed successfully. It does not return the removed object. 
If the object cannot be removed for any reason, the SDK raises an ovirtsdk4.Error exception containing the details of the failure. 2.6.6. Using Other Action Methods There are other service methods that perform miscellaneous operations, such as stopping and starting a virtual machine: # Start the virtual machine: vm_service.start() Many of these methods include parameters that modify the operation. For example, the method that starts a virtual machine supports a use_cloud_init parameter, if you want to start it using cloud-init : # Start the virtual machine: vm_service.start(cloud_init=True) Most action methods return None when they succeed and raise an ovirtsdk4.Error when they fail. A few action methods return values. For example, the service that manages a storage domain has an is_attached action method that checks whether the storage domain is already attached to a data center and returns a Boolean value: # Check if the storage domain is attached to a data center: sds_service = system_service.storage_domains_service() sd_service = sds_service.storage_domain_service('123') if sd_service.is_attached(): ... Check the reference documentation of the SDK to see the action methods supported by each service, the parameters that they take, and the values that they return. 2.7. Additional Resources For detailed information and examples, see the following resources: V3 REST API Guide V4 REST API Guide Python SDK reference documentation Python SDK examples Generating Modules You can generate documentation using pydoc for the following modules: ovirtsdk.api ovirtsdk.infrastructure.brokers ovirtsdk.infrastructure.errors The documentation is provided by the ovirt-engine-sdk-python package. Run the following command on the Manager machine to view the latest version of these documents: | [
"vms_service = connection.system_service().vms_service()",
"import ovirtsdk4 as sdk connection = sdk.Connection( url='https://engine.example.com/ovirt-engine/api', username='admin@internal', password='password', ca_file='ca.pem', )",
"connection.close()",
"from ovirtsdk4 import types vm = types.Vm( name='vm1', cluster=types.Cluster( name='Default' ), template=types.Template( name='mytemplate' ), memory=1073741824 )",
"vm = types.Vm() vm.name = 'vm1' vm.cluster = types.Cluster(name='Default') vm.template = types.Template(name='mytemplate') vm.memory=1073741824",
"vm = types.Vm( name='vm1', custom_properties=[ types.CustomProperty(...), types.CustomProperty(...), ] )",
"if vm.status == types.VmStatus.DOWN: elif vm.status == types.VmStatus.IMAGE_LOCKED: .",
"print(\"vm.name: %s\" % vm.name) print(\"vm.memory: %s\" % vm.memory) for custom_property in vm.custom_properties:",
"<vm id=\"123\" href=\"/ovirt-engine/api/vms/123\"> <name>vm1</name> <link rel=\"diskattachments\" href=\"/ovirt-engine/api/vms/123/diskattachments/> </vm>",
"Retrieve the virtual machine: vm = vm_service.get() Follow the link to the disk attachments, and then to the disks: attachments = connection.follow_link(vm.disk_attachments) for attachment in attachments: disk = connection.follow_link(attachment.disk) print(\"disk.alias: \" % disk.alias)",
"system_service = connection.system_service()",
"vms_service = system_service.vms_service()",
"vm_service = vms_service.vm_service('123')",
"Find the service that manages the virtual machine: vms_service = system_service.vms_service() vm_service = vms_service.vm_service('123') Retrieve the representation of the virtual machine: vm = vm_service.get()",
"Retrieve the representation of the virtual machine, not the current one, but the one that will be used after the next boot: vm = vm_service.get(next_run=True)",
"Call the service that manages a non-existent virtual machine. This call will succeed. vm_service = vms_service.vm_service('junk') Retrieve the virtual machine. This call will raise an exception. vm = vm_service.get()",
"Find the service that manages the collection of virtual machines: vms_service = system_service.vms_service() List the virtual machines in the collection vms = vms_service.list()",
"vms = vms_service.list(search='name=my*', max=10)",
"from ovirtsdk4 import types Add the virtual machine: vm = vms_service.add( vm=types.Vm( name='vm1', cluster=types.Cluster( name='Default' ), template=types.Template( name='mytemplate' ) ) )",
"Add the virtual machine: vm = vms_service.add( ) Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Start the virtual machine vm_service.start()",
"Add the virtual machine: vm = vms_service.add( ) Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Wait until the virtual machine is down, indicating that it is completely created: while True: time.sleep(5) vm = vm_service.get() if vm.status == types.VmStatus.DOWN: break",
"from ovirtsdk4 import types Find the virtual machine, and then the service that manages it: vm = vms_service.list(search='name=vm1')[0] vm_service = vm_service.vm_service(vm.id) Update the name: updated_vm = vm_service.update( vm=types.Vm( name='newvm' ) )",
"Retrieve the complete representation: vm = vm_service.get() Update the representation, in memory, without sending a request to the server: vm.name = 'newvm' Send the update. Do *not* do this. vms_service.update(vm)",
"Update the memory of the virtual machine to 1 GiB, not during the current run, but after next boot: vm = vm_service.update( vm=types.Vm( memory=1073741824 ), next_run=True )",
"Find the virtual machine by name: vm = vms_service.list(search='name=123')[0] Find the service that manages the virtual machine using the ID: vm_service = vms_service.vm_service(vm.id) Remove the virtual machine: vm_service.remove()",
"Remove the virtual machine while preserving the disks: vm_service.remove(detach_only=True)",
"Start the virtual machine: vm_service.start()",
"Start the virtual machine: vm_service.start(cloud_init=True)",
"Check if the storage domain is attached to a data center: sds_service = system_service.storage_domains_service() sd_service = sds_service.storage_domain_service('123') if sd_service.is_attached():",
"pydoc [MODULE]"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/python_sdk_guide/chap-using_the_software_development_kit |
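As a minimal end-to-end sketch of the connection, service-location, and error-handling patterns described in this chapter, the following example assumes a Manager reachable at the placeholder URL, placeholder credentials and ca.pem path, and a virtual machine named myvm; replace these values with ones that match your environment. The polling loop is a simplified illustration rather than production-ready code.

import time

import ovirtsdk4 as sdk
from ovirtsdk4 import types

# Placeholder connection details; replace them with values for your environment.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

try:
    # Locate the service that manages the collection of virtual machines.
    vms_service = connection.system_service().vms_service()

    # Find the virtual machine by name; list() returns an empty list if none match.
    vms = vms_service.list(search='name=myvm', max=1)
    if not vms:
        raise RuntimeError("No virtual machine named 'myvm' was found")

    # Locate the service that manages this specific virtual machine.
    vm_service = vms_service.vm_service(vms[0].id)

    # Start the machine only if it is currently down, then poll until it is up.
    vm = vm_service.get()
    if vm.status == types.VmStatus.DOWN:
        vm_service.start()
        while vm.status != types.VmStatus.UP:
            time.sleep(5)
            vm = vm_service.get()
except sdk.AuthError:
    print('Authentication failed; check the user name and password.')
except sdk.ConnectionError:
    print('The server could not be reached; check the URL and the CA file.')
except sdk.Error as error:
    print('Operation failed: %s' % error)
finally:
    # Always release the HTTP connection pool and the authentication token.
    connection.close()

If the machine is already running, the sketch leaves it alone; a start that fails on the server side surfaces through the final sdk.Error handler, and the connection is closed in every case.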
Chapter 10. Notifications overview | Chapter 10. Notifications overview Quay.io supports adding notifications to a repository for various events that occur in the repository's lifecycle. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/about_quay_io/repository-notifications |
Chapter 7. Installing a cluster on GCP into an existing VPC | Chapter 7. Installing a cluster on GCP into an existing VPC In OpenShift Container Platform version 4.14, you can install a cluster into an existing Virtual Private Cloud (VPC) on Google Cloud Platform (GCP). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 7.2. About using a custom VPC In OpenShift Container Platform 4.14, you can deploy a cluster into existing subnets in an existing Virtual Private Cloud (VPC) in Google Cloud Platform (GCP). By deploying OpenShift Container Platform into an existing GCP VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. You must configure networking for the subnets. 7.2.1. Requirements for using your VPC The union of the VPC CIDR block and the machine network CIDR must be non-empty. The subnets must be within the machine network. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 7.2.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide one subnet for control-plane machines and one subnet for compute machines. The subnet's CIDRs belong to the machine CIDR that you specified. 7.2.3. Division of permissions Some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. 7.2.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. 
Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 7.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. 
Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 7.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . 
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.6.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 7.1. Machine series C2 C2D C3 E2 M1 N1 N2 N2D Tau T2D 7.6.3. Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 7.2. Machine series for 64-bit ARM machines Tau T2A 7.6.4. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 7.6.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 7.6.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. 
For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 7.6.7. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{"auths": ...}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 26 1 15 17 18 24 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 9 If you do not provide these parameters and values, the installation program provides the default value. 4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . 
If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 19 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 20 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 16 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 21 Specify the name of an existing VPC. 22 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. 23 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. 25 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 26 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources Enabling customer-managed encryption keys for a compute machine set 7.6.8. 
Create an Ingress Controller with global access on GCP You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers. Prerequisites You created the install-config.yaml and complete any modifications to it. Procedure Create an Ingress Controller with global access on a new GCP cluster. Change to the directory that contains the installation program and create a manifest file: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: Sample clientAccess configuration to Global apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService 1 Set gcp.clientAccess to Global . 2 Global access is only available to Ingress Controllers using internal load balancers. 7.6.9. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 7.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 7.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 7.8.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 7.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). 
Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 7.8.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 
4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 7.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 7.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
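Note A quick way to double-check the permissions prerequisite before starting the deployment is to inspect the project IAM policy with the gcloud CLI. A minimal sketch, with the project ID as a placeholder:
$ gcloud auth list
$ gcloud projects get-iam-policy <gcp_project_id>
The first command shows which account is active; review the role bindings returned by the second command and compare them with the GCP permissions required for installation.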
Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
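Note If you prefer not to modify the shell environment, the same kubeconfig file can be passed on a per-command basis instead of being exported. A minimal sketch:
$ oc --kubeconfig <installation_directory>/auth/kubeconfig whoami
Both approaches read the same credentials; exporting KUBECONFIG is simply more convenient for an interactive session.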
Verify you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 7.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 7.12. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{\"auths\": ...}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 26",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1",
"ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml",
"cluster-ingress-default-ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_gcp/installing-gcp-vpc |
Chapter 5. Configuring resources for managed components on OpenShift Container Platform | Chapter 5. Configuring resources for managed components on OpenShift Container Platform You can manually adjust the resources on Red Hat Quay on OpenShift Container Platform for the following components that have running pods: quay clair mirroring clairpostgres postgres This feature allows users to run smaller test clusters, or to request more resources upfront in order to avoid partially degraded Quay pods. Limits and requests can be set in accordance with Kubernetes resource units . The following components should not be set lower than their minimum requirements. Setting them lower can cause issues with your deployment and, in some cases, result in failure of the pod's deployment. quay : Minimum of 6 GB, 2 vCPUs clair : Recommended minimum of 2 GB memory, 2 vCPUs clairpostgres : Minimum of 200 MB You can configure resource requests by using the OpenShift Container Platform UI, or directly by updating the QuayRegistry YAML. Important The default values set for these components are the suggested values. Setting resource requests too high or too low might lead to inefficient resource utilization or performance degradation, respectively. 5.1. Configuring resource requests by using the OpenShift Container Platform UI Use the following procedure to configure resources by using the OpenShift Container Platform UI. Procedure On the OpenShift Container Platform developer console, click Operators Installed Operators Red Hat Quay . Click QuayRegistry . Click the name of your registry, for example, example-registry . Click YAML . In the spec.components field, you can override the resources of the quay , clair , mirroring , clairpostgres , and postgres components by setting values for the overrides.resources.limits and overrides.resources.requests fields. For example: spec: components: - kind: clair managed: true overrides: resources: limits: cpu: "5" # Limiting to 5 CPU (equivalent to 5000m or 5000 millicpu) memory: "18Gi" # Limiting to 18 Gibibytes of memory requests: cpu: "4" # Requesting 4 CPU memory: "4Gi" # Requesting 4 Gibibytes of memory - kind: postgres managed: true overrides: resources: limits: {} 1 requests: cpu: "700m" # Requesting 700 millicpu or 0.7 CPU memory: "4Gi" # Requesting 4 Gibibytes of memory - kind: mirror managed: true overrides: resources: limits: 2 requests: cpu: "800m" # Requesting 800 millicpu or 0.8 CPU memory: "1Gi" # Requesting 1 Gibibyte of memory - kind: quay managed: true overrides: resources: limits: cpu: "4" # Limiting to 4 CPU memory: "10Gi" # Limiting to 10 Gibibytes of memory requests: cpu: "4" # Requesting 4 CPU memory: "10Gi" # Requesting 10 Gibi of memory - kind: clairpostgres managed: true overrides: resources: limits: cpu: "800m" # Limiting to 800 millicpu or 0.8 CPU memory: "3Gi" # Limiting to 3 Gibibytes of memory requests: {} 1 Setting the limits or requests fields to {} uses the default values for these resources. 2 Leaving the limits or requests field empty puts no limitations on these resources. 5.2. Configuring resource requests by editing the QuayRegistry YAML You can reconfigure resource requests for Red Hat Quay after you have already deployed a registry. To do so, edit the QuayRegistry YAML file directly and then re-deploy the registry, as described in the following procedure.
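Note After you apply overrides with either method (the UI steps above or the YAML procedure that follows), you can confirm that the new values reached the running pods by inspecting the pod specifications. A minimal sketch, with the pod name and namespace as placeholders:
$ oc get pod <quay_pod_name> -n <quay_namespace> -o jsonpath='{.spec.containers[*].resources}'
The output should show the limits and requests that you configured; if a pod stays in the Pending state, the requests might exceed what the cluster nodes can provide.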
Procedure Optional: If you do not have a local copy of the QuayRegistry YAML file, enter the following command to obtain it: $ oc get quayregistry <registry_name> -n <namespace> -o yaml > quayregistry.yaml Open the quayregistry.yaml file created in Step 1 of this procedure and make the desired changes. For example: - kind: quay managed: true overrides: resources: limits: {} requests: cpu: "0.7" # Requesting 0.7 CPU (equivalent to 700m or 700 millicpu) memory: "512Mi" # Requesting 512 Mebibytes of memory Save the changes. Apply the updated configuration to the Red Hat Quay registry by running the following command: $ oc replace -f quayregistry.yaml Example output quayregistry.quay.redhat.com/example-registry replaced | [
"spec: components: - kind: clair managed: true overrides: resources: limits: cpu: \"5\" # Limiting to 5 CPU (equivalent to 5000m or 5000 millicpu) memory: \"18Gi\" # Limiting to 18 Gibibytes of memory requests: cpu: \"4\" # Requesting 4 CPU memory: \"4Gi\" # Requesting 4 Gibibytes of memory - kind: postgres managed: true overrides: resources: limits: {} 1 requests: cpu: \"700m\" # Requesting 700 millicpu or 0.7 CPU memory: \"4Gi\" # Requesting 4 Gibibytes of memory - kind: mirror managed: true overrides: resources: limits: 2 requests: cpu: \"800m\" # Requesting 800 millicpu or 0.8 CPU memory: \"1Gi\" # Requesting 1 Gibibyte of memory - kind: quay managed: true overrides: resources: limits: cpu: \"4\" # Limiting to 4 CPU memory: \"10Gi\" # Limiting to 10 Gibibytes of memory requests: cpu: \"4\" # Requesting 4 CPU memory: \"10Gi\" # Requesting 10 Gibi of memory - kind: clairpostgres managed: true overrides: resources: limits: cpu: \"800m\" # Limiting to 800 millicpu or 0.8 CPU memory: \"3Gi\" # Limiting to 3 Gibibytes of memory requests: {}",
"oc get quayregistry <registry_name> -n <namespace> -o yaml > quayregistry.yaml",
"- kind: quay managed: true overrides: resources: limits: {} requests: cpu: \"0.7\" # Requesting 0.7 CPU (equivalent to 500m or 500 millicpu) memory: \"512Mi\" # Requesting 512 Mebibytes of memory",
"oc replace -f quayregistry.yaml",
"quayregistry.quay.redhat.com/example-registry replaced"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/configuring-resources-managed-components |