Chapter 12. Managing control plane machines

12.1. About control plane machine sets
With control plane machine sets, you can automate management of the control plane machine resources within your OpenShift Container Platform cluster.
Important Control plane machine sets cannot manage compute machines, and compute machine sets cannot manage control plane machines.
Control plane machine sets provide management capabilities for control plane machines that are similar to the capabilities that compute machine sets provide for compute machines. However, these two types of machine sets are separate custom resources defined within the Machine API and have several fundamental differences in their architecture and functionality.

12.1.1. Control Plane Machine Set Operator overview
The Control Plane Machine Set Operator uses the ControlPlaneMachineSet custom resource (CR) to automate management of the control plane machine resources within your OpenShift Container Platform cluster. When the state of the cluster control plane machine set is set to Active, the Operator ensures that the cluster has the correct number of control plane machines with the specified configuration. This allows the automated replacement of degraded control plane machines and rollout of changes to the control plane. A cluster has only one control plane machine set, and the Operator only manages objects in the openshift-machine-api namespace.

12.1.1.1. Control Plane Machine Set Operator limitations
The Control Plane Machine Set Operator has the following limitations:
Only Amazon Web Services (AWS), Google Cloud Platform (GCP), IBM Power(R) Virtual Server, Microsoft Azure, Nutanix, VMware vSphere, and Red Hat OpenStack Platform (RHOSP) clusters are supported.
Clusters that do not have preexisting machines that represent the control plane nodes cannot use a control plane machine set, and cannot enable the use of a control plane machine set after installation. Generally, preexisting control plane machines are only present if a cluster was installed using infrastructure provisioned by the installation program. To determine if a cluster has the required preexisting control plane machines, run the following command as a user with administrator privileges:
$ oc get machine \
  -n openshift-machine-api \
  -l machine.openshift.io/cluster-api-machine-role=master
Example output showing preexisting control plane machines
NAME                           PHASE     TYPE         REGION      ZONE         AGE
<infrastructure_id>-master-0   Running   m6i.xlarge   us-west-1   us-west-1a   5h19m
<infrastructure_id>-master-1   Running   m6i.xlarge   us-west-1   us-west-1b   5h19m
<infrastructure_id>-master-2   Running   m6i.xlarge   us-west-1   us-west-1a   5h19m
Example output missing preexisting control plane machines
No resources found in openshift-machine-api namespace.
The Operator requires the Machine API Operator to be operational and is therefore not supported on clusters with manually provisioned machines. When installing an OpenShift Container Platform cluster with manually provisioned machines for a platform that creates an active generated ControlPlaneMachineSet custom resource (CR), you must remove the Kubernetes manifest files that define the control plane machine set, as instructed in the installation process.
Only clusters with three control plane machines are supported. Horizontal scaling of the control plane is not supported.
Deploying Azure control plane machines on Ephemeral OS disks increases the risk of data loss and is not supported.
Deploying control plane machines as AWS Spot Instances, GCP preemptible VMs, or Azure Spot VMs is not supported.
Important Attempting to deploy control plane machines as AWS Spot Instances, GCP preemptible VMs, or Azure Spot VMs might cause the cluster to lose etcd quorum. A cluster that loses all control plane machines simultaneously is unrecoverable.
Making changes to the control plane machine set during or prior to installation is not supported. You must make any changes to the control plane machine set only after installation.

12.1.2. Additional resources
Control Plane Machine Set Operator reference
ControlPlaneMachineSet custom resource

12.2. Getting started with control plane machine sets
The process for getting started with control plane machine sets depends on the state of the ControlPlaneMachineSet custom resource (CR) in your cluster.
Clusters with an active generated CR: Clusters that have a generated CR with an active state use the control plane machine set by default. No administrator action is required.
Clusters with an inactive generated CR: For clusters that include an inactive generated CR, you must review the CR configuration and activate the CR.
Clusters without a generated CR: For clusters that do not include a generated CR, you must create and activate a CR with the appropriate configuration for your cluster.
If you are uncertain about the state of the ControlPlaneMachineSet CR in your cluster, you can verify the CR status.

12.2.1. Supported cloud providers
In OpenShift Container Platform 4.16, the control plane machine set is supported for Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Nutanix, Red Hat OpenStack Platform (RHOSP), and VMware vSphere clusters. The status of the control plane machine set after installation depends on your cloud provider and the version of OpenShift Container Platform that you installed on your cluster.
Table 12.1. Control plane machine set implementation for OpenShift Container Platform 4.16
Cloud provider                       Active by default   Generated CR   Manual CR required
Amazon Web Services (AWS)            X [1]               X
Google Cloud Platform (GCP)          X [2]               X
Microsoft Azure                      X [2]               X
Nutanix                              X [3]               X
Red Hat OpenStack Platform (RHOSP)   X [3]               X
VMware vSphere                       X [4]               X
[1] AWS clusters that are upgraded from version 4.11 or earlier require CR activation.
[2] GCP and Azure clusters that are upgraded from version 4.12 or earlier require CR activation.
[3] Nutanix and RHOSP clusters that are upgraded from version 4.13 or earlier require CR activation.
[4] vSphere clusters that are upgraded from version 4.15 or earlier require CR activation.

12.2.2. Checking the control plane machine set custom resource state
You can verify the existence and state of the ControlPlaneMachineSet custom resource (CR).
Procedure
Determine the state of the CR by running the following command:
$ oc get controlplanemachineset.machine.openshift.io cluster \
  --namespace openshift-machine-api
A result of Active indicates that the ControlPlaneMachineSet CR exists and is activated. No administrator action is required.
A result of Inactive indicates that a ControlPlaneMachineSet CR exists but is not activated.
A result of NotFound indicates that there is no existing ControlPlaneMachineSet CR.
Next steps
To use the control plane machine set, you must ensure that a ControlPlaneMachineSet CR with the correct settings for your cluster exists.
If your cluster has an existing CR, you must verify that the configuration in the CR is correct for your cluster.
If your cluster does not have an existing CR, you must create one with the correct configuration for your cluster.
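If you check the CR state as part of cluster automation, you can print only the state field by using the jsonpath output format. The following command is a minimal sketch that uses the default resource name cluster; if the CR does not exist, the command returns a NotFound error instead of a state value:
$ oc get controlplanemachineset.machine.openshift.io cluster \
  -n openshift-machine-api \
  -o jsonpath='{.spec.state}{"\n"}'
12.2.3.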
Activating the control plane machine set custom resource To use the control plane machine set, you must ensure that a ControlPlaneMachineSet custom resource (CR) with the correct settings for your cluster exists. On a cluster with a generated CR, you must verify that the configuration in the CR is correct for your cluster and activate it. Note For more information about the parameters in the CR, see "Control plane machine set configuration". Procedure View the configuration of the CR by running the following command: USD oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster Change the values of any fields that are incorrect for your cluster configuration. When the configuration is correct, activate the CR by setting the .spec.state field to Active and saving your changes. Important To activate the CR, you must change the .spec.state field to Active in the same oc edit session that you use to update the CR configuration. If the CR is saved with the state left as Inactive , the control plane machine set generator resets the CR to its original settings. Additional resources Control plane machine set configuration 12.2.4. Creating a control plane machine set custom resource To use the control plane machine set, you must ensure that a ControlPlaneMachineSet custom resource (CR) with the correct settings for your cluster exists. On a cluster without a generated CR, you must create the CR manually and activate it. Note For more information about the structure and parameters of the CR, see "Control plane machine set configuration". Procedure Create a YAML file using the following template: Control plane machine set CR YAML file template apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 1 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 2 strategy: type: RollingUpdate 3 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 4 <platform_failure_domains> 5 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> 6 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 7 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You must specify this value when you create a ControlPlaneMachineSet CR. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 Specify the state of the Operator. When the state is Inactive , the Operator is not operational. You can activate the Operator by setting the value to Active . Important Before you activate the CR, you must ensure that its configuration is correct for your cluster requirements. 3 Specify the update strategy for the cluster. Valid values are OnDelete and RollingUpdate . The default value is RollingUpdate . For more information about update strategies, see "Updating the control plane configuration". 4 Specify your cloud provider platform name. Valid values are AWS , Azure , GCP , Nutanix , VSphere , and OpenStack . 5 Add the <platform_failure_domains> configuration for the cluster. 
The format and values of this section are provider-specific. For more information, see the sample failure domain configuration for your cloud provider. 6 Specify the infrastructure ID. 7 Add the <platform_provider_spec> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample provider specification for your cloud provider.
Refer to the sample YAML for a control plane machine set CR and populate your file with values that are appropriate for your cluster configuration.
Refer to the sample failure domain configuration and sample provider specification for your cloud provider and update those sections of your file with the appropriate values.
When the configuration is correct, activate the CR by setting the .spec.state field to Active and saving your changes.
Create the CR from your YAML file by running the following command:
$ oc create -f <control_plane_machine_set>.yaml
where <control_plane_machine_set> is the name of the YAML file that contains the CR configuration.
Additional resources
Updating the control plane configuration
Control plane machine set configuration
Provider-specific configuration options

12.3. Managing control plane machines with control plane machine sets
Control plane machine sets automate several essential aspects of control plane management.

12.3.1. Updating the control plane configuration
You can make changes to the configuration of the machines in the control plane by updating the specification in the control plane machine set custom resource (CR). The Control Plane Machine Set Operator monitors the control plane machines and compares their configuration with the specification in the control plane machine set CR. When there is a discrepancy between the specification in the CR and the configuration of a control plane machine, the Operator marks that control plane machine for replacement.
Note For more information about the parameters in the CR, see "Control plane machine set configuration".
Prerequisites
Your cluster has an activated and functioning Control Plane Machine Set Operator.
Procedure
Edit your control plane machine set CR by running the following command:
$ oc edit controlplanemachineset.machine.openshift.io cluster \
  -n openshift-machine-api
Change the values of any fields that you want to update in your cluster configuration.
Save your changes.
Next steps
For clusters that use the default RollingUpdate update strategy, the control plane machine set propagates changes to your control plane configuration automatically.
For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually.

12.3.1.1. Automatic updates to the control plane configuration
The RollingUpdate update strategy automatically propagates changes to your control plane configuration. This update strategy is the default configuration for the control plane machine set.
For clusters that use the RollingUpdate update strategy, the Operator creates a replacement control plane machine with the configuration that is specified in the CR. When the replacement control plane machine is ready, the Operator deletes the control plane machine that is marked for replacement. The replacement machine then joins the control plane.
If multiple control plane machines are marked for replacement, the Operator protects etcd health during replacement by repeating this replacement process one machine at a time until it has replaced each machine.
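The behavior that applies is controlled by the strategy field of the ControlPlaneMachineSet CR. The following snippet is a minimal sketch of that field only; RollingUpdate is the default value, and setting OnDelete instead switches the cluster to the manual workflow described in the next section:
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
  # ...
  strategy:
    type: RollingUpdate   # change to OnDelete to replace control plane machines manually
  # ...
12.3.1.2.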
Manual updates to the control plane configuration You can use the OnDelete update strategy to propagate changes to your control plane configuration by replacing machines manually. Manually replacing machines allows you to test changes to your configuration on a single machine before applying the changes more broadly. For clusters that are configured to use the OnDelete update strategy, the Operator creates a replacement control plane machine when you delete an existing machine. When the replacement control plane machine is ready, the etcd Operator allows the existing machine to be deleted. The replacement machine then joins the control plane. If multiple control plane machines are deleted, the Operator creates all of the required replacement machines simultaneously. The Operator maintains etcd health by preventing more than one machine being removed from the control plane at once. 12.3.2. Replacing a control plane machine To replace a control plane machine in a cluster that has a control plane machine set, you delete the machine manually. The control plane machine set replaces the deleted machine with one using the specification in the control plane machine set custom resource (CR). Prerequisites If your cluster runs on Red Hat OpenStack Platform (RHOSP) and you need to evacuate a compute server, such as for an upgrade, you must disable the RHOSP compute node that the machine runs on by running the following command: USD openstack compute service set <target_node_host_name> nova-compute --disable For more information, see Preparing to migrate in the RHOSP documentation. Procedure List the control plane machines in your cluster by running the following command: USD oc get machines \ -l machine.openshift.io/cluster-api-machine-role==master \ -n openshift-machine-api Delete a control plane machine by running the following command: USD oc delete machine \ -n openshift-machine-api \ <control_plane_machine_name> 1 1 Specify the name of the control plane machine to delete. Note If you delete multiple control plane machines, the control plane machine set replaces them according to the configured update strategy: For clusters that use the default RollingUpdate update strategy, the Operator replaces one machine at a time until each machine is replaced. For clusters that are configured to use the OnDelete update strategy, the Operator creates all of the required replacement machines simultaneously. Both strategies maintain etcd health during control plane machine replacement. 12.3.3. Additional resources Control plane machine set configuration Provider-specific configuration options 12.4. Control plane machine set configuration This example YAML snippet shows the base structure for a control plane machine set custom resource (CR). 12.4.1. Sample YAML for a control plane machine set custom resource The base of the ControlPlaneMachineSet CR is structured the same way for all platforms. 
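To compare this base structure with the resource that is in use on a running cluster, you can retrieve the full CR in YAML format. This command is a minimal sketch that assumes the default resource name cluster shown in the following sample:
$ oc get controlplanemachineset.machine.openshift.io cluster \
  -n openshift-machine-api \
  -o yaml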
Sample ControlPlaneMachineSet CR YAML file apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster 1 namespace: openshift-machine-api spec: replicas: 3 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 3 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 4 strategy: type: RollingUpdate 5 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 6 <platform_failure_domains> 7 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 8 1 Specifies the name of the ControlPlaneMachineSet CR, which is cluster . Do not change this value. 2 Specifies the number of control plane machines. Only clusters with three control plane machines are supported, so the replicas value is 3 . Horizontal scaling is not supported. Do not change this value. 3 Specifies the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You must specify this value when you create a ControlPlaneMachineSet CR. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 4 Specifies the state of the Operator. When the state is Inactive , the Operator is not operational. You can activate the Operator by setting the value to Active . Important Before you activate the Operator, you must ensure that the ControlPlaneMachineSet CR configuration is correct for your cluster requirements. For more information about activating the Control Plane Machine Set Operator, see "Getting started with control plane machine sets". 5 Specifies the update strategy for the cluster. The allowed values are OnDelete and RollingUpdate . The default value is RollingUpdate . For more information about update strategies, see "Updating the control plane configuration". 6 Specifies the cloud provider platform name. Do not change this value. 7 Specifies the <platform_failure_domains> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample failure domain configuration for your cloud provider. 8 Specifies the <platform_provider_spec> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample provider specification for your cloud provider. Additional resources Getting started with control plane machine sets Updating the control plane configuration 12.4.2. Provider-specific configuration options The <platform_provider_spec> and <platform_failure_domains> sections of the control plane machine set manifests are provider specific. For provider-specific configuration options for your cluster, see the following resources: Control plane configuration options for Amazon Web Services Control plane configuration options for Google Cloud Platform Control plane configuration options for Microsoft Azure Control plane configuration options for Nutanix Control plane configuration options for Red Hat OpenStack Platform (RHOSP) Control plane configuration options for VMware vSphere 12.5. Configuration options for control plane machines 12.5.1. 
Control plane configuration options for Amazon Web Services You can change the configuration of your Amazon Web Services (AWS) control plane machines and enable features by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.1.1. Sample YAML for configuring Amazon Web Services clusters The following example YAML snippets show provider specification and failure domain configurations for an AWS cluster. 12.5.1.1.1. Sample AWS provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. You can omit any field that is set in the failure domain section of the CR. In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Sample AWS providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: ami: id: ami-<ami_id_string> 1 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: 2 encrypted: true iops: 0 kmsKey: arn: "" volumeSize: 120 volumeType: gp3 credentialsSecret: name: aws-cloud-credentials 3 deviceIndex: 0 iamInstanceProfile: id: <cluster_id>-master-profile 4 instanceType: m6i.xlarge 5 kind: AWSMachineProviderConfig 6 loadBalancers: 7 - name: <cluster_id>-int type: network - name: <cluster_id>-ext type: network metadata: creationTimestamp: null metadataServiceOptions: {} placement: 8 region: <region> 9 availabilityZone: "" 10 tenancy: 11 securityGroups: - filters: - name: tag:Name values: - <cluster_id>-master-sg 12 subnet: {} 13 userDataSecret: name: master-user-data 14 1 Specifies the Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Images (AMI) ID for the cluster. The AMI must belong to the same region as the cluster. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. 2 Specifies the configuration of an encrypted EBS volume. 3 Specifies the secret name for the cluster. Do not change this value. 4 Specifies the AWS Identity and Access Management (IAM) instance profile. Do not change this value. 5 Specifies the AWS instance type for the control plane. 6 Specifies the cloud provider platform type. Do not change this value. 7 Specifies the internal ( int ) and external ( ext ) load balancers for the cluster. Note You can omit the external ( ext ) load balancer parameters on private OpenShift Container Platform clusters. 8 Specifies where to create the control plane instance in AWS. 9 Specifies the AWS region for the cluster. 10 This parameter is configured in the failure domain and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Control Plane Machine Set Operator overwrites it with the value in the failure domain. 11 Specifies the AWS Dedicated Instance configuration for the control plane. 
For more information, see AWS documentation about Dedicated Instances . The following values are valid: default : The Dedicated Instance runs on shared hardware. dedicated : The Dedicated Instance runs on single-tenant hardware. host : The Dedicated Instance runs on a Dedicated Host, which is an isolated server with configurations that you can control. 12 Specifies the control plane machines security group. 13 This parameter is configured in the failure domain and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Control Plane Machine Set Operator overwrites it with the value in the failure domain. Note If the failure domain configuration does not specify a value, the value in the provider specification is used. Configuring a subnet in the failure domain overwrites the subnet value in the provider specification. 14 Specifies the control plane user data secret. Do not change this value. 12.5.1.1.2. Sample AWS failure domain configuration The control plane machine set concept of a failure domain is analogous to existing AWS concept of an Availability Zone (AZ) . The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible. When configuring AWS failure domains in the control plane machine set, you must specify the availability zone name and the subnet to use. Sample AWS failure domain values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... machines_v1beta1_machine_openshift_io: failureDomains: aws: - placement: availabilityZone: <aws_zone_a> 1 subnet: 2 filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_a> 3 type: Filters 4 - placement: availabilityZone: <aws_zone_b> 5 subnet: filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_b> 6 type: Filters platform: AWS 7 # ... 1 Specifies an AWS availability zone for the first failure domain. 2 Specifies a subnet configuration. In this example, the subnet type is Filters , so there is a filters stanza. 3 Specifies the subnet name for the first failure domain, using the infrastructure ID and the AWS availability zone. 4 Specifies the subnet type. The allowed values are: ARN , Filters and ID . The default value is Filters . 5 Specifies the subnet name for an additional failure domain, using the infrastructure ID and the AWS availability zone. 6 Specifies the cluster's infrastructure ID and the AWS availability zone for the additional failure domain. 7 Specifies the cloud provider platform name. Do not change this value. 12.5.1.2. Enabling Amazon Web Services features for control plane machines You can enable features by updating values in the control plane machine set. 12.5.1.2.1. Restricting the API server to private After you deploy a cluster to Amazon Web Services (AWS), you can reconfigure the API server to use only the private zone. Prerequisites Install the OpenShift CLI ( oc ). Have access to the web console as a user with admin privileges. Procedure In the web portal or console for your cloud provider, take the following actions: Locate and delete the appropriate load balancer component: For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer. Delete the api.USDclustername.USDyourdomain DNS entry in the public zone. 
Remove the external load balancers by deleting the following indicated lines in the control plane machine set custom resource: # ... providerSpec: value: # ... loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network # ... 1 Delete the name value for the external load balancer, which ends in -ext . 2 Delete the type value for the external load balancer. Additional resources Configuring the Ingress Controller endpoint publishing scope to Internal 12.5.1.2.2. Changing the Amazon Web Services instance type by using a control plane machine set You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the specification in the control plane machine set custom resource (CR). Prerequisites Your AWS cluster uses a control plane machine set. Procedure Edit the following line under the providerSpec field: providerSpec: value: ... instanceType: <compatible_aws_instance_type> 1 1 Specify a larger AWS instance type with the same base as the selection. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge . Save your changes. 12.5.1.2.3. Assigning machines to placement groups for Elastic Fabric Adapter instances by using machine sets You can configure a machine set to deploy machines on Elastic Fabric Adapter (EFA) instances within an existing AWS placement group. EFA instances do not require placement groups, and you can use placement groups for purposes other than configuring an EFA. This example uses both to demonstrate a configuration that can improve network performance for machines within the specified placement group. Prerequisites You created a placement group in the AWS console. Note Ensure that the rules and limitations for the type of placement group that you create are compatible with your intended use case. The control plane machine set spreads the control plane machines across multiple failure domains when possible. To use placement groups for the control plane, you must use a placement group type that can span multiple Availability Zones. Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following lines under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5 # ... 1 Specify an instance type that supports EFAs . 2 Specify the EFA network interface type. 3 Specify the zone, for example, us-east-1a . 4 Specify the region, for example, us-east-1 . 5 Specify the name of the existing AWS placement group to deploy machines in. Verification In the AWS console, find a machine that the machine set created and verify the following in the machine properties: The placement group field has the value that you specified for the placementGroupName parameter in the machine set. The interface type field indicates that it uses an EFA. 12.5.1.2.4. Machine set options for the Amazon EC2 Instance Metadata Service You can use machine sets to create machines that use a specific version of the Amazon EC2 Instance Metadata Service (IMDS). Machine sets can create machines that allow the use of both IMDSv1 and IMDSv2 or machines that require the use of IMDSv2. Note Using IMDSv2 is only supported on AWS clusters that were created with OpenShift Container Platform version 4.7 or later. 
Important Before configuring a machine set to create machines that require IMDSv2, ensure that any workloads that interact with the AWS metadata service support IMDSv2. 12.5.1.2.4.1. Configuring IMDS by using machine sets You can specify whether to require the use of IMDSv2 by adding or editing the value of metadataServiceOptions.authentication in the machine set YAML file for your machines. Prerequisites To use IMDSv2, your AWS cluster must have been created with OpenShift Container Platform version 4.7 or later. Procedure Add or edit the following lines under the providerSpec field: providerSpec: value: metadataServiceOptions: authentication: Required 1 1 To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. 12.5.1.2.5. Machine sets that deploy machines as Dedicated Instances You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account. Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware. 12.5.1.2.5.1. Creating Dedicated Instances by using machine sets You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS. Procedure Specify a dedicated tenancy under the providerSpec field: providerSpec: placement: tenancy: dedicated 12.5.2. Control plane configuration options for Microsoft Azure You can change the configuration of your Microsoft Azure control plane machines and enable features by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.2.1. Sample YAML for configuring Microsoft Azure clusters The following example YAML snippets show provider specification and failure domain configurations for an Azure cluster. 12.5.2.1.1. Sample Azure provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane Machine CR that is created by the installation program. You can omit any field that is set in the failure domain section of the CR. In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Sample Azure providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... 
spec: providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials 1 namespace: openshift-machine-api diagnostics: {} image: 2 offer: "" publisher: "" resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 3 sku: "" version: "" internalLoadBalancer: <cluster_id>-internal 4 kind: AzureMachineProviderSpec 5 location: <region> 6 managedIdentity: <cluster_id>-identity metadata: creationTimestamp: null name: <cluster_id> networkResourceGroup: <cluster_id>-rg osDisk: 7 diskSettings: {} diskSizeGB: 1024 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <cluster_id> 8 resourceGroup: <cluster_id>-rg subnet: <cluster_id>-master-subnet 9 userDataSecret: name: master-user-data 10 vmSize: Standard_D8s_v3 vnet: <cluster_id>-vnet zone: "1" 11 1 Specifies the secret name for the cluster. Do not change this value. 2 Specifies the image details for your control plane machine set. 3 Specifies an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 4 Specifies the internal load balancer for the control plane. This field might not be preconfigured but is required in both the ControlPlaneMachineSet and control plane Machine CRs. 5 Specifies the cloud provider platform type. Do not change this value. 6 Specifies the region to place control plane machines on. 7 Specifies the disk configuration for the control plane. 8 Specifies the public load balancer for the control plane. Note You can omit the publicLoadBalancer parameter on private OpenShift Container Platform clusters that have user-defined outbound routing. 9 Specifies the subnet for the control plane. 10 Specifies the control plane user data secret. Do not change this value. 11 Specifies the zone configuration for clusters that use a single zone for all failure domains. Note If the cluster is configured to use a different zone for each failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using different zones for each failure domain, the Control Plane Machine Set Operator ignores it. 12.5.2.1.2. Sample Azure failure domain configuration The control plane machine set concept of a failure domain is analogous to existing Azure concept of an Azure availability zone . The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible. When configuring Azure failure domains in the control plane machine set, you must specify the availability zone name. An Azure cluster uses a single subnet that spans multiple zones. Sample Azure failure domain values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... machines_v1beta1_machine_openshift_io: failureDomains: azure: - zone: "1" 1 - zone: "2" - zone: "3" platform: Azure 2 # ... 1 Each instance of zone specifies an Azure availability zone for a failure domain. Note If the cluster is configured to use a single zone for all failure domains, the zone parameter is configured in the provider specification instead of in the failure domain configuration. 2 Specifies the cloud provider platform name. Do not change this value. 12.5.2.2. 
Enabling Microsoft Azure features for control plane machines
You can enable features by updating values in the control plane machine set.

12.5.2.2.1. Restricting the API server to private
After you deploy a cluster to Microsoft Azure, you can reconfigure the API server to use only the private zone.
Prerequisites
Install the OpenShift CLI (oc).
Have access to the web console as a user with admin privileges.
Procedure
In the web portal or console for your cloud provider, take the following actions:
Locate and delete the appropriate load balancer component.
Delete the api.$clustername.$yourdomain DNS entry in the public zone.
Remove the external load balancers by deleting the following indicated lines in the control plane machine set custom resource:
# ...
providerSpec:
  value:
    # ...
    loadBalancers:
    - name: lk4pj-ext 1
      type: network 2
    - name: lk4pj-int
      type: network
# ...
1 Delete the name value for the external load balancer, which ends in -ext.
2 Delete the type value for the external load balancer.
Additional resources
Configuring the Ingress Controller endpoint publishing scope to Internal

12.5.2.2.2. Using the Azure Marketplace offering
You can create a machine set running on Azure that deploys machines that use the Azure Marketplace offering. To use this offering, you must first obtain the Azure Marketplace image. When obtaining your image, consider the following:
While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher.
The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image.
Important Installing images with the Azure Marketplace is not supported on clusters with 64-bit ARM instances.
Prerequisites
You have installed the Azure CLI client (az).
Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client.
Procedure
Display all of the available OpenShift Container Platform images by running one of the following commands:
North America:
$ az vm image list --all --offer rh-ocp-worker --publisher redhat -o table
Example output
Offer          Publisher  Sku                 Urn                                                       Version
rh-ocp-worker  RedHat     rh-ocp-worker       RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409        4.15.2024072409
rh-ocp-worker  RedHat     rh-ocp-worker-gen1  RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409   4.15.2024072409
EMEA:
$ az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table
Example output
Offer          Publisher       Sku                 Urn                                                               Version
rh-ocp-worker  redhat-limited  rh-ocp-worker       redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409        4.15.2024072409
rh-ocp-worker  redhat-limited  rh-ocp-worker-gen1  redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409   4.15.2024072409
Note Use the latest image that is available for compute and control plane nodes.
If required, your VMs are automatically upgraded as part of the installation process. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer, specifically the values for publisher , offer , sku , and version . Add the following parameters to the providerSpec section of your machine set YAML file using the image details for your offer: Sample providerSpec image values for Azure Marketplace machines providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: "" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700 12.5.2.2.3. Enabling Azure boot diagnostics You can enable boot diagnostics on Azure machines that your machine set creates. Prerequisites Have an existing Microsoft Azure cluster. Procedure Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file: For an Azure Managed storage account: providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1 1 Specifies an Azure Managed storage account. For an Azure Unmanaged storage account: providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2 1 Specifies an Azure Unmanaged storage account. 2 Replace <storage-account> with the name of your storage account. Note Only the Azure Blob Storage data service is supported. Verification On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine. 12.5.2.2.4. Machine sets that deploy machines with ultra disks as data disks You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads. Additional resources Microsoft Azure ultra disks documentation 12.5.2.2.4.1. Creating machines with ultra disks by using machine sets You can deploy machines with ultra disks on Azure by editing your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Create a custom secret in the openshift-machine-api namespace using the master data secret by running the following command: USD oc -n openshift-machine-api \ get secret <role>-user-data \ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2 1 Replace <role> with master . 2 Specify userData.txt as the name of the new custom secret. In a text editor, open the userData.txt file and locate the final } character in the file. On the immediately preceding line, add a , . 
Create a new line after the , and add the following configuration details: "storage": { "disks": [ 1 { "device": "/dev/disk/azure/scsi1/lun0", 2 "partitions": [ 3 { "label": "lun0p1", 4 "sizeMiB": 1024, 5 "startMiB": 0 } ] } ], "filesystems": [ 6 { "device": "/dev/disk/by-partlabel/lun0p1", "format": "xfs", "path": "/var/lib/lun0p1" } ] }, "systemd": { "units": [ 7 { "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var/lib/lun0p1\nWhat=/dev/disk/by-partlabel/lun0p1\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", 8 "enabled": true, "name": "var-lib-lun0p1.mount" } ] } 1 The configuration details for the disk that you want to attach to a node as an ultra disk. 2 Specify the lun value that is defined in the dataDisks stanza of the machine set you are using. For example, if the machine set contains lun: 0 , specify lun0 . You can initialize multiple data disks by specifying multiple "disks" entries in this configuration file. If you specify multiple "disks" entries, ensure that the lun value for each matches the value in the machine set. 3 The configuration details for a new partition on the disk. 4 Specify a label for the partition. You might find it helpful to use hierarchical names, such as lun0p1 for the first partition of lun0 . 5 Specify the total size in MiB of the partition. 6 Specify the filesystem to use when formatting a partition. Use the partition label to specify the partition. 7 Specify a systemd unit to mount the partition at boot. Use the partition label to specify the partition. You can create multiple partitions by specifying multiple "partitions" entries in this configuration file. If you specify multiple "partitions" entries, you must specify a systemd unit for each. 8 For Where , specify the value of storage.filesystems.path . For What , specify the value of storage.filesystems.device . Extract the disabling template value to a file called disableTemplating.txt by running the following command: USD oc -n openshift-machine-api get secret <role>-user-data \ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt 1 Replace <role> with master . Combine the userData.txt file and disableTemplating.txt file to create a data secret file by running the following command: USD oc -n openshift-machine-api create secret generic <role>-user-data-x5 \ 1 --from-file=userData=userData.txt \ --from-file=disableTemplating=disableTemplating.txt 1 For <role>-user-data-x5 , specify the name of the secret. Replace <role> with master . Edit your control plane machine set CR by running the following command: USD oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster Add the following lines in the positions indicated: apiVersion: machine.openshift.io/v1beta1 kind: ControlPlaneMachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4 1 Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value. 2 3 These lines enable the use of ultra disks. For dataDisks , include the entire stanza. 4 Specify the user data secret created earlier. Replace <role> with master . Save your changes. 
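Before the Operator begins replacing control plane machines, you can confirm that the combined user data secret exists in the openshift-machine-api namespace. This check is a minimal sketch that assumes you replaced <role> with master and kept the example secret name from the previous steps:
$ oc -n openshift-machine-api get secret master-user-data-x5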
For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration. For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually. Verification Validate that the machines are created by running the following command: USD oc get machines The machines should be in the Running state. For a machine that is running and has a node attached, validate the partition by running the following command: USD oc debug node/<node-name> -- chroot /host lsblk In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with -- . The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine. steps To use an ultra disk on the control plane, reconfigure your workload to use the control plane's ultra disk mount point. 12.5.2.2.4.2. Troubleshooting resources for machine sets that enable ultra disks Use the information in this section to understand and recover from issues you might encounter. 12.5.2.2.4.2.1. Incorrect ultra disk configuration If an incorrect configuration of the ultraSSDCapability parameter is specified in the machine set, the machine provisioning fails. For example, if the ultraSSDCapability parameter is set to Disabled , but an ultra disk is specified in the dataDisks parameter, the following error message appears: StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set. To resolve this issue, verify that your machine set configuration is correct. 12.5.2.2.4.2.2. Unsupported disk parameters If a region, availability zone, or instance size that is not compatible with ultra disks is specified in the machine set, the machine provisioning fails. Check the logs for the following error message: failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>." To resolve this issue, verify that you are using this feature in a supported environment and that your machine set configuration is correct. 12.5.2.2.4.2.3. Unable to delete disks If the deletion of ultra disks as data disks is not working as expected, the machines are deleted and the data disks are orphaned. You must delete the orphaned disks manually if desired. 12.5.2.2.5. Enabling customer-managed encryption keys for a machine set You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API. An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If not, an additional reader role is required to be granted on the disk encryption set. Prerequisites Create an Azure Key Vault instance . Create an instance of a disk encryption set . Grant the disk encryption set access to key vault . Procedure Configure the disk encryption set under the providerSpec field in your machine set YAML file. 
For example: providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS Additional resources Azure documentation about customer-managed keys 12.5.2.2.6. Configuring trusted launch for Azure virtual machines by using machine sets Important Using trusted launch for Azure virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.16 supports trusted launch for Azure virtual machines (VMs). By editing the machine set YAML file, you can configure the trusted launch options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. Note Some feature combinations result in an invalid configuration. Table 12.2. UEFI feature combination compatibility Secure Boot [1] vTPM [2] Valid configuration Enabled Enabled Yes Enabled Disabled Yes Enabled Omitted Yes Disabled Enabled Yes Omitted Enabled Yes Disabled Disabled No Omitted Disabled No Omitted Omitted No Using the secureBoot field. Using the virtualizedTrustedPlatformModule field. For more information about related features and functionality, see the Microsoft Azure documentation about Trusted launch for Azure virtual machines . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field to provide a valid configuration: Sample valid configuration with UEFI Secure Boot and vTPM enabled apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ... 1 Enables the use of trusted launch for Azure virtual machines. This value is required for all valid configurations. 2 Specifies which UEFI security features to use. This section is required for all valid configurations. 3 Enables UEFI Secure Boot. 4 Enables the use of a vTPM. Verification On the Azure portal, review the details for a machine deployed by the machine set and verify that the trusted launch options match the values that you configured. 12.5.2.2.7. Configuring Azure confidential virtual machines by using machine sets Important Using Azure confidential virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.16 supports Azure confidential virtual machines (VMs). Note Confidential VMs are currently not supported on 64-bit ARM architectures. By editing the machine set YAML file, you can configure the confidential VM options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. Warning Not all instance types support confidential VMs. Do not change the instance type for a control plane machine set that is configured to use confidential VMs to a type that is incompatible. Using an incompatible instance type can cause your cluster to become unstable. For more information about related features and functionality, see the Microsoft Azure documentation about Confidential virtual machines . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: Sample configuration apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: spec: providerSpec: value: osDisk: # ... managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # ... securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8 # ... 1 Specifies security profile settings for the managed disk when using a confidential VM. 2 Enables encryption of the Azure VM Guest State (VMGS) blob. This setting requires the use of vTPM. 3 Specifies security profile settings for the confidential VM. 4 Enables the use of confidential VMs. This value is required for all valid configurations. 5 Specifies which UEFI security features to use. This section is required for all valid configurations. 6 Disables UEFI Secure Boot. 7 Enables the use of a vTPM. 8 Specifies an instance type that supports confidential VMs. Verification On the Azure portal, review the details for a machine deployed by the machine set and verify that the confidential VM options match the values that you configured. 12.5.2.2.8. Accelerated Networking for Microsoft Azure VMs Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. This enhances network performance. This feature can be enabled after installation. 12.5.2.2.8.1. Limitations Consider the following limitations when deciding whether to use Accelerated Networking: Accelerated Networking is only supported on clusters where the Machine API is operational. Accelerated Networking requires an Azure VM size that includes at least four vCPUs. To satisfy this requirement, you can change the value of vmSize in your machine set. For information about Azure VM sizes, see Microsoft Azure documentation . 12.5.2.2.9. Configuring Capacity Reservation by using machine sets OpenShift Container Platform version 4.16.3 and later supports on-demand Capacity Reservation with Capacity Reservation groups on Microsoft Azure clusters. You can configure a machine set to deploy machines on any available resources that match the parameters of a capacity request that you define. These parameters specify the VM size, region, and number of instances that you want to reserve. 
If your Azure subscription quota can accommodate the capacity request, the deployment succeeds. For more information, including limitations and suggested use cases for this Azure instance type, see the Microsoft Azure documentation about On-demand Capacity Reservation . Note You cannot change an existing Capacity Reservation configuration for a machine set. To use a different Capacity Reservation group, you must replace the machine set and the machines that the machine set deployed. Prerequisites You have access to the cluster with cluster-admin privileges. You installed the OpenShift CLI ( oc ). You created a Capacity Reservation group. For more information, see the Microsoft Azure documentation Create a Capacity Reservation . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: Sample configuration apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1 # ... 1 Specify the ID of the Capacity Reservation group that you want the machine set to deploy machines on. Verification To verify machine deployment, list the machines that the machine set created by running the following command: USD oc get machine \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machine-role=master In the output, verify that the characteristics of the listed machines match the parameters of your Capacity Reservation. 12.5.2.2.9.1. Enabling Accelerated Networking on an existing Microsoft Azure cluster You can enable Accelerated Networking on Azure by adding acceleratedNetworking to your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster where the Machine API is operational. Procedure Add the following to the providerSpec field: providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2 1 This line enables Accelerated Networking. 2 Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation . Verification On the Microsoft Azure portal, review the Networking settings page for a machine provisioned by the machine set, and verify that the Accelerated networking field is set to Enabled . 12.5.3. Control plane configuration options for Google Cloud Platform You can change the configuration of your Google Cloud Platform (GCP) control plane machines and enable features by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.3.1. Sample YAML for configuring Google Cloud Platform clusters The following example YAML snippets show provider specification and failure domain configurations for a GCP cluster. 12.5.3.1.1. Sample GCP provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. You can omit any field that is set in the failure domain section of the CR. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI. 
Infrastructure ID The <cluster_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Image path The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.disks[0].image}{"\n"}' \ get ControlPlaneMachineSet/cluster Sample GCP providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials 1 deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 2 labels: null sizeGb: 200 type: pd-ssd kind: GCPMachineProviderSpec 3 machineType: e2-standard-4 metadata: creationTimestamp: null metadataServiceOptions: {} networkInterfaces: - network: <cluster_id>-network subnetwork: <cluster_id>-master-subnet projectID: <project_name> 4 region: <region> 5 serviceAccounts: 6 - email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform shieldedInstanceConfig: {} tags: - <cluster_id>-master targetPools: - <cluster_id>-api userDataSecret: name: master-user-data 7 zone: "" 8 1 Specifies the secret name for the cluster. Do not change this value. 2 Specifies the path to the image that was used to create the disk. To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 3 Specifies the cloud provider platform type. Do not change this value. 4 Specifies the name of the GCP project that you use for your cluster. 5 Specifies the GCP region for the cluster. 6 Specifies a single service account. Multiple service accounts are not supported. 7 Specifies the control plane user data secret. Do not change this value. 8 This parameter is configured in the failure domain, and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Operator overwrites it with the value in the failure domain. 12.5.3.1.2. Sample GCP failure domain configuration The control plane machine set concept of a failure domain is analogous to the existing GCP concept of a zone . The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible. When configuring GCP failure domains in the control plane machine set, you must specify the zone name to use. Sample GCP failure domain values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... 
machines_v1beta1_machine_openshift_io: failureDomains: gcp: - zone: <gcp_zone_a> 1 - zone: <gcp_zone_b> 2 - zone: <gcp_zone_c> - zone: <gcp_zone_d> platform: GCP 3 # ... 1 Specifies a GCP zone for the first failure domain. 2 Specifies an additional failure domain. Further failure domains are added the same way. 3 Specifies the cloud provider platform name. Do not change this value. 12.5.3.2. Enabling Google Cloud Platform features for control plane machines You can enable features by updating values in the control plane machine set. 12.5.3.2.1. Configuring persistent disk types by using machine sets You can configure the type of persistent disk that a machine set deploys machines on by editing the machine set YAML file. For more information about persistent disk types, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about persistent disks . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following line under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet ... spec: template: spec: providerSpec: value: disks: type: pd-ssd 1 1 Control plane nodes must use the pd-ssd disk type. Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Type field matches the configured disk type. 12.5.3.2.2. Configuring Confidential VM by using machine sets By editing the machine set YAML file, you can configure the Confidential VM options that a machine set uses for machines that it deploys. For more information about Confidential VM features, functions, and compatibility, see the GCP Compute Engine documentation about Confidential VM . Note Confidential VMs are currently not supported on 64-bit ARM architectures. Important OpenShift Container Platform 4.16 does not support some Confidential Compute features, such as Confidential VMs with AMD Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP). Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet ... spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3 ... 1 Specify whether Confidential VM is enabled. Valid values are Disabled or Enabled . 2 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VM does not support live VM migration. 3 Specify a machine type that supports Confidential VM. Confidential VM supports the N2D and C2D series of machine types. Verification On the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Confidential VM options match the values that you configured. 12.5.3.2.3. Configuring Shielded VM options by using machine sets By editing the machine set YAML file, you can configure the Shielded VM options that a machine set uses for machines that it deploys. For more information about Shielded VM features and functionality, see the GCP Compute Engine documentation about Shielded VM . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. 
Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ... 1 In this section, specify any Shielded VM options that you want. 2 Specify whether integrity monitoring is enabled. Valid values are Disabled or Enabled . Note When integrity monitoring is enabled, you must not disable virtual trusted platform module (vTPM). 3 Specify whether UEFI Secure Boot is enabled. Valid values are Disabled or Enabled . 4 Specify whether vTPM is enabled. Valid values are Disabled or Enabled . Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Shielded VM options match the values that you configured. Additional resources What is Shielded VM? Secure Boot Virtual Trusted Platform Module (vTPM) Integrity monitoring 12.5.3.2.4. Enabling customer-managed encryption keys for a machine set Google Cloud Platform (GCP) Compute Engine allows users to supply an encryption key to encrypt data on disks at rest. The key is used to encrypt the data encryption key, not to encrypt the customer's data. By default, Compute Engine encrypts this data by using Compute Engine keys. You can enable encryption with a customer-managed key in clusters that use the Machine API. You must first create a KMS key and assign the correct permissions to a service account. The KMS key name, key ring name, and location are required to allow a service account to use your key. Note If you do not want to use a dedicated service account for the KMS encryption, the Compute Engine default service account is used instead. You must grant the default service account permission to access the keys if you do not use a dedicated service account. The Compute Engine default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. Procedure To allow a specific service account to use your KMS key and to grant the service account the correct IAM role, run the following command with your KMS key name, key ring name, and location: USD gcloud kms keys add-iam-policy-binding <key_name> \ --keyring <key_ring_name> \ --location <key_ring_location> \ --member "serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com" \ --role roles/cloudkms.cryptoKeyEncrypterDecrypter Configure the encryption key under the providerSpec field in your machine set YAML file. For example: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet ... spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5 1 The name of the customer-managed encryption key that is used for the disk encryption. 2 The name of the KMS key ring that the KMS key belongs to. 3 The GCP location in which the KMS key ring exists. 4 Optional: The ID of the project in which the KMS key ring exists. If a project ID is not set, the machine set projectID in which the machine set was created is used. 5 Optional: The service account that is used for the encryption request for the given KMS key. If a service account is not set, the Compute Engine default service account is used. 
When a new machine is created by using the updated providerSpec object configuration, the disk encryption key is encrypted with the KMS key. 12.5.4. Control plane configuration options for Nutanix You can change the configuration of your Nutanix control plane machines by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.4.1. Sample YAML for configuring Nutanix clusters The following example YAML snippet shows a provider specification configuration for a Nutanix cluster. 12.5.4.1.1. Sample Nutanix provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI. Infrastructure ID The <cluster_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Sample Nutanix providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: "" 1 categories: 2 - key: <category_name> value: <category_value> cluster: 3 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials 4 image: 5 name: <cluster_id>-rhcos type: name kind: NutanixMachineProviderConfig 6 memorySize: 16Gi 7 metadata: creationTimestamp: null project: 8 type: name name: <project_name> subnets: 9 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 10 userDataSecret: name: master-user-data 11 vcpuSockets: 8 12 vcpusPerSocket: 1 13 1 Specifies the boot type that the control plane machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Valid values are Legacy , SecureBoot , or UEFI . The default is Legacy . Note You must use the Legacy boot type in OpenShift Container Platform 4.16. 2 Specifies one or more Nutanix Prism categories to apply to control plane machines. This stanza requires key and value parameters for a category key-value pair that exists in Prism Central. For more information about categories, see Category management . 3 Specifies a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid , so there is a uuid stanza. Note Clusters that use OpenShift Container Platform version 4.15 or later can use failure domain configurations. If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it. 4 Specifies the secret name for the cluster. Do not change this value. 5 Specifies the image that was used to create the disk. 6 Specifies the cloud provider platform type. Do not change this value. 7 Specifies the memory allocated for the control plane machines. 
8 Specifies the Nutanix project that you use for your cluster. In this example, the project type is name , so there is a name stanza. 9 Specifies a subnet configuration. In this example, the subnet type is uuid , so there is a uuid stanza. Note Clusters that use OpenShift Container Platform version 4.15 or later can use failure domain configurations. If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it. 10 Specifies the VM disk size for the control plane machines. 11 Specifies the control plane user data secret. Do not change this value. 12 Specifies the number of vCPU sockets allocated for the control plane machines. 13 Specifies the number of vCPUs for each control plane vCPU socket. 12.5.4.1.2. Failure domains for Nutanix clusters To add or update the failure domain configuration on a Nutanix cluster, you must make coordinated changes to several resources. The following actions are required: Modify the cluster infrastructure custom resource (CR). Modify the cluster control plane machine set CR. Modify or replace the compute machine set CRs. For more information, see "Adding failure domains to an existing Nutanix cluster" in the Post-installation configuration content. Additional resources Adding failure domains to an existing Nutanix cluster 12.5.5. Control plane configuration options for Red Hat OpenStack Platform You can change the configuration of your Red Hat OpenStack Platform (RHOSP) control plane machines and enable features by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.5.1. Sample YAML for configuring Red Hat OpenStack Platform (RHOSP) clusters The following example YAML snippets show provider specification and failure domain configurations for an RHOSP cluster. 12.5.5.1.1. Sample RHOSP provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. Sample OpenStack providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials 1 namespace: openshift-machine-api flavor: m1.xlarge 2 image: ocp1-2g2xs-rhcos kind: OpenstackProviderSpec 3 metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: ocp1-2g2xs-nodes tags: openshiftClusterID=ocp1-2g2xs securityGroups: - filter: {} name: ocp1-2g2xs-master 4 serverGroupName: ocp1-2g2xs-master serverMetadata: Name: ocp1-2g2xs-master openshiftClusterID: ocp1-2g2xs tags: - openshiftClusterID=ocp1-2g2xs trunk: true userDataSecret: name: master-user-data 1 The secret name for the cluster. Do not change this value. 2 The RHOSP flavor type for the control plane. 3 The RHOSP cloud provider platform type. Do not change this value. 4 The control plane machines security group. 12.5.5.1.2. 
Sample RHOSP failure domain configuration The control plane machine set concept of a failure domain is analogous to the existing Red Hat OpenStack Platform (RHOSP) concept of an availability zone . The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible. The following example demonstrates the use of multiple Nova availability zones as well as Cinder availability zones. Sample OpenStack failure domain values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... machines_v1beta1_machine_openshift_io: failureDomains: platform: OpenStack openstack: - availabilityZone: nova-az0 rootVolume: availabilityZone: cinder-az0 - availabilityZone: nova-az1 rootVolume: availabilityZone: cinder-az1 - availabilityZone: nova-az2 rootVolume: availabilityZone: cinder-az2 # ... 12.5.5.2. Enabling Red Hat OpenStack Platform (RHOSP) features for control plane machines You can enable features by updating values in the control plane machine set. 12.5.5.2.1. Changing the RHOSP compute flavor by using a control plane machine set You can change the Red Hat OpenStack Platform (RHOSP) compute service (Nova) flavor that your control plane machines use by updating the specification in the control plane machine set custom resource. In RHOSP, flavors define the compute, memory, and storage capacity of computing instances. By increasing or decreasing the flavor size, you can scale your control plane vertically. Prerequisites Your RHOSP cluster uses a control plane machine set. Procedure Edit the following line under the providerSpec field: providerSpec: value: # ... flavor: m1.xlarge 1 1 Specify an RHOSP flavor that has the same base as the existing selection. For example, you can change m1.xlarge to a larger flavor in the same family, such as m1.2xlarge, if your RHOSP environment defines one. You can choose larger or smaller flavors depending on your vertical scaling needs. Save your changes. After you save your changes, machines are replaced with ones that use the flavor you chose. 12.5.6. Control plane configuration options for VMware vSphere You can change the configuration of your VMware vSphere control plane machines by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.6.1. Sample YAML for configuring VMware vSphere clusters The following example YAML snippets show provider specification and failure domain configurations for a vSphere cluster. 12.5.6.1.1. Sample VMware vSphere provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. Sample vSphere providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ...
spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 2 kind: VSphereMachineProviderSpec 3 memoryMiB: 16384 4 metadata: creationTimestamp: null network: 5 devices: - networkName: <vm_network_name> numCPUs: 4 6 numCoresPerSocket: 4 7 snapshot: "" template: <vm_template_name> 8 userDataSecret: name: master-user-data 9 workspace: 10 datacenter: <vcenter_data_center_name> 11 datastore: <vcenter_datastore_name> 12 folder: <path_to_vcenter_vm_folder> 13 resourcePool: <vsphere_resource_pool> 14 server: <vcenter_server_ip> 15 1 Specifies the secret name for the cluster. Do not change this value. 2 Specifies the VM disk size for the control plane machines. 3 Specifies the cloud provider platform type. Do not change this value. 4 Specifies the memory allocated for the control plane machines. 5 Specifies the network on which the control plane is deployed. Note If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it. 6 Specifies the number of CPUs allocated for the control plane machines. 7 Specifies the number of cores for each control plane CPU. 8 Specifies the vSphere VM template to use, such as user-5ddjd-rhcos . Note If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it. 9 Specifies the control plane user data secret. Do not change this value. 10 Specifies the workspace details for the control plane. Note If the cluster is configured to use a failure domain, these parameters are configured in the failure domain. If you specify these values in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores them. 11 Specifies the vCenter data center for the control plane. 12 Specifies the vCenter datastore for the control plane. 13 Specifies the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 14 Specifies the vSphere resource pool for your VMs. 15 Specifies the vCenter server IP or fully qualified domain name. 12.5.6.1.2. Sample VMware vSphere failure domain configuration On VMware vSphere infrastructure, the cluster-wide infrastructure Custom Resource Definition (CRD), infrastructures.config.openshift.io , defines failure domains for your cluster. The providerSpec in the ControlPlaneMachineSet custom resource (CR) specifies names for failure domains that the control plane machine set uses to ensure control plane nodes are deployed to the appropriate failure domain. A failure domain is an infrastructure resource made up of a control plane machine set, a vCenter data center, vCenter datastore, and a network. By using a failure domain resource, you can use a control plane machine set to deploy control plane machines on separate clusters or data centers. A control plane machine set also balances control plane machines across defined failure domains to provide fault tolerance capabilities to your infrastructure. Note If you modify the ProviderSpec configuration in the ControlPlaneMachineSet CR, the control plane machine set updates all control plane machines deployed on the primary infrastructure and each failure domain infrastructure. 
Sample VMware vSphere failure domain values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... machines_v1beta1_machine_openshift_io: failureDomains: 1 platform: VSphere vsphere: 2 - name: <failure_domain_name1> - name: <failure_domain_name2> # ... 1 Specifies the vCenter location for OpenShift Container Platform cluster nodes. 2 Specifies failure domains by name for the control plane machine set. Important Each name field value in this section must match the corresponding value in the failureDomains.name field of the cluster-wide infrastructure CRD. You can find the value of the failureDomains.name field by running the following command: USD oc get infrastructure cluster -o=jsonpath={.spec.platformSpec.vsphere.failureDomains[0].name} The name field is the only supported failure domain field that you can specify in the ControlPlaneMachineSet CR. For an example of a cluster-wide infrastructure CRD that defines resources for each failure domain, see "Specifying multiple regions and zones for your cluster on vSphere." Additional resources Specifying multiple regions and zones for your cluster on vSphere 12.5.6.2. Enabling VMware vSphere features for control plane machines You can enable features by updating values in the control plane machine set. 12.5.6.2.1. Adding tags to machines by using machine sets OpenShift Container Platform adds a cluster-specific tag to each virtual machine (VM) that it creates. The installation program uses these tags to select the VMs to delete when uninstalling a cluster. In addition to the cluster-specific tags assigned to VMs, you can configure a machine set to add up to 10 additional vSphere tags to the VMs it provisions. Prerequisites You have access to an OpenShift Container Platform cluster installed on vSphere using an account with cluster-admin permissions. You have access to the VMware vCenter console associated with your cluster. You have created a tag in the vCenter console. You have installed the OpenShift CLI ( oc ). Procedure Use the vCenter console to find the tag ID for any tag that you want to add to your machines: Log in to the vCenter console. From the Home menu, click Tags & Custom Attributes . Select a tag that you want to add to your machines. Use the browser URL for the tag that you select to identify the tag ID. Example tag URL https://vcenter.example.com/ui/app/tags/tag/urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL/permissions Example tag ID urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following lines under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: spec: providerSpec: value: tagIDs: 1 - <tag_id_value> 2 # ... 1 Specify a list of up to 10 tags to add to the machines that this machine set provisions. 2 Specify the value of the tag that you want to add to your machines. For example, urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL . 12.6. Control plane resiliency and recovery You can use the control plane machine set to improve the resiliency of the control plane for your OpenShift Container Platform cluster. 12.6.1. High availability and fault tolerance with failure domains When possible, the control plane machine set spreads the control plane machines across multiple failure domains. 
This configuration provides high availability and fault tolerance within the control plane. This strategy can help protect the control plane when issues arise within the infrastructure provider. 12.6.1.1. Failure domain platform support and configuration The control plane machine set concept of a failure domain is analogous to existing concepts on cloud providers. Not all platforms support the use of failure domains. Table 12.3. Failure domain support matrix Cloud provider Support for failure domains Provider nomenclature Amazon Web Services (AWS) X Availability Zone (AZ) Google Cloud Platform (GCP) X zone Microsoft Azure X Azure availability zone Nutanix X failure domain Red Hat OpenStack Platform (RHOSP) X OpenStack Nova availability zones and OpenStack Cinder availability zones VMware vSphere X failure domain mapped to a vSphere Zone [1] For more information, see "Regions and zones for a VMware vCenter". The failure domain configuration in the control plane machine set custom resource (CR) is platform-specific. For more information about failure domain parameters in the CR, see the sample failure domain configuration for your provider. Additional resources Sample Amazon Web Services failure domain configuration Sample Google Cloud Platform failure domain configuration Sample Microsoft Azure failure domain configuration Adding failure domains to an existing Nutanix cluster Sample Red Hat OpenStack Platform (RHOSP) failure domain configuration Sample VMware vSphere failure domain configuration Regions and zones for a VMware vCenter 12.6.1.2. Balancing control plane machines The control plane machine set balances control plane machines across the failure domains that are specified in the custom resource (CR). When possible, the control plane machine set uses each failure domain equally to ensure appropriate fault tolerance. If there are fewer failure domains than control plane machines, failure domains are selected for reuse alphabetically by name. For clusters with no failure domains specified, all control plane machines are placed within a single failure domain. Some changes to the failure domain configuration cause the control plane machine set to rebalance the control plane machines. For example, if you add failure domains to a cluster with fewer failure domains than control plane machines, the control plane machine set rebalances the machines across all available failure domains. 12.6.2. Recovery of failed control plane machines The Control Plane Machine Set Operator automates the recovery of control plane machines. When a control plane machine is deleted, the Operator creates a replacement with the configuration that is specified in the ControlPlaneMachineSet custom resource (CR). For clusters that use control plane machine sets, you can configure a machine health check. The machine health check deletes unhealthy control plane machines so that they are replaced. Important If you configure a MachineHealthCheck resource for the control plane, set the value of maxUnhealthy to 1 . This configuration ensures that the machine health check takes no action when multiple control plane machines appear to be unhealthy. Multiple unhealthy control plane machines can indicate that the etcd cluster is degraded or that a scaling operation to replace a failed machine is in progress. If the etcd cluster is degraded, manual intervention might be required. If a scaling operation is in progress, the machine health check should allow it to finish. 
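A minimal sketch of a control plane machine health check with this constraint applied is shown below. The resource name, unhealthy conditions, and timeout values are illustrative assumptions that you must adapt to your cluster; only the maxUnhealthy: 1 value and the control plane machine labels follow from the guidance in this section.

Sample control plane MachineHealthCheck resource (illustrative)

apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: control-plane-health 1
  namespace: openshift-machine-api
spec:
  maxUnhealthy: 1 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: master
      machine.openshift.io/cluster-api-machine-type: master
  unhealthyConditions: 3
  - type: Ready
    status: Unknown
    timeout: 300s
  - type: Ready
    status: "False"
    timeout: 300s

1 An example name only; choose a name that is meaningful for your cluster.
2 Ensures that the machine health check takes no action when more than one control plane machine appears to be unhealthy.
3 Example conditions and timeouts; tune them for your environment.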
Additional resources Deploying machine health checks 12.6.3. Quorum protection with machine lifecycle hooks For OpenShift Container Platform clusters that use the Machine API Operator, the etcd Operator uses lifecycle hooks for the machine deletion phase to implement a quorum protection mechanism. By using a preDrain lifecycle hook, the etcd Operator can control when the pods on a control plane machine are drained and removed. To protect etcd quorum, the etcd Operator prevents the removal of an etcd member until it migrates that member onto a new node within the cluster. This mechanism allows the etcd Operator precise control over the members of the etcd quorum and allows the Machine API Operator to safely create and remove control plane machines without specific operational knowledge of the etcd cluster. 12.6.3.1. Control plane deletion with quorum protection processing order When a control plane machine is replaced on a cluster that uses a control plane machine set, the cluster temporarily has four control plane machines. When the fourth control plane node joins the cluster, the etcd Operator starts a new etcd member on the replacement node. When the etcd Operator observes that the old control plane machine is marked for deletion, it stops the etcd member on the old node and promotes the replacement etcd member to join the quorum of the cluster. The control plane machine Deleting phase proceeds in the following order: A control plane machine is slated for deletion. The control plane machine enters the Deleting phase. To satisfy the preDrain lifecycle hook, the etcd Operator takes the following actions: The etcd Operator waits until a fourth control plane machine is added to the cluster as an etcd member. This new etcd member has a state of Running but not ready until it receives the full database update from the etcd leader. When the new etcd member receives the full database update, the etcd Operator promotes the new etcd member to a voting member and removes the old etcd member from the cluster. After this transition is complete, it is safe for the old etcd pod and its data to be removed, so the preDrain lifecycle hook is removed. The control plane machine status condition Drainable is set to True . The machine controller attempts to drain the node that is backed by the control plane machine. If draining fails, Drained is set to False and the machine controller attempts to drain the node again. If draining succeeds, Drained is set to True . The control plane machine status condition Drained is set to True . If no other Operators have added a preTerminate lifecycle hook, the control plane machine status condition Terminable is set to True . The machine controller removes the instance from the infrastructure provider. The machine controller deletes the Node object. YAML snippet demonstrating the etcd quorum protection preDrain lifecycle hook apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: ... spec: lifecycleHooks: preDrain: - name: EtcdQuorumOperator 1 owner: clusteroperator/etcd 2 ... 1 The name of the preDrain lifecycle hook. 2 The hook-implementing controller that manages the preDrain lifecycle hook. Additional resources Lifecycle hooks for the machine deletion phase 12.7. Troubleshooting the control plane machine set Use the information in this section to understand and recover from issues you might encounter. 12.7.1. Checking the control plane machine set custom resource state You can verify the existence and state of the ControlPlaneMachineSet custom resource (CR). 
Procedure Determine the state of the CR by running the following command: USD oc get controlplanemachineset.machine.openshift.io cluster \ --namespace openshift-machine-api A result of Active indicates that the ControlPlaneMachineSet CR exists and is activated. No administrator action is required. A result of Inactive indicates that a ControlPlaneMachineSet CR exists but is not activated. A result of NotFound indicates that there is no existing ControlPlaneMachineSet CR. Next steps To use the control plane machine set, you must ensure that a ControlPlaneMachineSet CR with the correct settings for your cluster exists. If your cluster has an existing CR, you must verify that the configuration in the CR is correct for your cluster. If your cluster does not have an existing CR, you must create one with the correct configuration for your cluster. Additional resources Activating the control plane machine set custom resource Creating a control plane machine set custom resource 12.7.2. Adding a missing Azure internal load balancer The internalLoadBalancer parameter is required in both the ControlPlaneMachineSet and control plane Machine custom resources (CRs) for Azure. If this parameter is not preconfigured on your cluster, you must add it to both CRs. For more information about where this parameter is located in the Azure provider specification, see the sample Azure provider specification. The placement in the control plane Machine CR is similar. Procedure List the control plane machines in your cluster by running the following command: USD oc get machines \ -l machine.openshift.io/cluster-api-machine-role==master \ -n openshift-machine-api For each control plane machine, edit the CR by running the following command: USD oc edit machine <control_plane_machine_name> Add the internalLoadBalancer parameter with the correct details for your cluster and save your changes. Edit your control plane machine set CR by running the following command: USD oc edit controlplanemachineset.machine.openshift.io cluster \ -n openshift-machine-api Add the internalLoadBalancer parameter with the correct details for your cluster and save your changes. Next steps For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration. For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually. Additional resources Sample Microsoft Azure provider specification 12.7.3. Recovering a degraded etcd Operator Certain situations can cause the etcd Operator to become degraded. For example, while performing remediation, the machine health check might delete a control plane machine that is hosting etcd. If the etcd member is not reachable at that time, the etcd Operator becomes degraded. When the etcd Operator is degraded, manual intervention is required to force the Operator to remove the failed member and restore the cluster state. Procedure List the control plane machines in your cluster by running the following command: USD oc get machines \ -l machine.openshift.io/cluster-api-machine-role==master \ -n openshift-machine-api \ -o wide Any of the following conditions might indicate a failed control plane machine: The STATE value is stopped . The PHASE value is Failed . The PHASE value is Deleting for more than ten minutes. Important Before continuing, ensure that your cluster has two healthy control plane machines.
Performing the actions in this procedure on more than one control plane machine risks losing etcd quorum and can cause data loss. If you have lost the majority of your control plane hosts, leading to etcd quorum loss, then you must follow the disaster recovery procedure "Restoring to a cluster state" instead of this procedure. Edit the machine CR for the failed control plane machine by running the following command: USD oc edit machine <control_plane_machine_name> Remove the contents of the lifecycleHooks parameter from the failed control plane machine and save your changes. The etcd Operator removes the failed machine from the cluster and can then safely add new etcd members. Additional resources Restoring to a cluster state 12.7.4. Upgrading clusters that run on RHOSP For clusters that run on Red Hat OpenStack Platform (RHOSP) that were created with OpenShift Container Platform 4.13 or earlier, you might have to perform post-upgrade tasks before you can use control plane machine sets. 12.7.4.1. Configuring RHOSP clusters that have machines with root volume availability zones after an upgrade For some clusters that run on Red Hat OpenStack Platform (RHOSP) that you upgrade, you must manually update machine resources before you can use control plane machine sets if the following configurations are true: The upgraded cluster was created with OpenShift Container Platform 4.13 or earlier. The cluster infrastructure is installer-provisioned. Machines were distributed across multiple availability zones. Machines were configured to use root volumes for which block storage availability zones were not defined. To understand why this procedure is necessary, see Solution #7024383 . Procedure For all control plane machines, edit the provider spec for all control plane machines that match the environment. For example, to edit the machine master-0 , enter the following command: USD oc edit machine/<cluster_id>-master-0 -n openshift-machine-api where: <cluster_id> Specifies the ID of the upgraded cluster. In the provider spec, set the value of the property rootVolume.availabilityZone to the volume of the availability zone you want to use. An example RHOSP provider spec providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.14 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 rootVolume: availabilityZone: nova 1 diskSize: 30 sourceUUID: rhcos-4.12 volumeType: fast-0 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data 1 Set the zone name as this value. Note If you edited or recreated machine resources after your initial cluster deployment, you might have to adapt these steps for your configuration. In your RHOSP cluster, find the availability zone of the root volumes for your machines and use that as the value. 
Run the following command to retrieve information about the control plane machine set resource: USD oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api Run the following command to edit the resource: USD oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api For that resource, set the value of the spec.state property to Active to activate control plane machine sets for your cluster. Your control plane is ready to be managed by the Cluster Control Plane Machine Set Operator. 12.7.4.2. Configuring RHOSP clusters that have control plane machines with availability zones after an upgrade For some clusters that run on Red Hat OpenStack Platform (RHOSP) that you upgrade, you must manually update machine resources before you can use control plane machine sets if the following configurations are true: The upgraded cluster was created with OpenShift Container Platform 4.13 or earlier. The cluster infrastructure is installer-provisioned. Control plane machines were distributed across multiple compute availability zones. To understand why this procedure is necessary, see Solution #7013893 . Procedure For the master-1 and master-2 control plane machines, open the provider specs for editing. For example, to edit the first machine, enter the following command: USD oc edit machine/<cluster_id>-master-1 -n openshift-machine-api where: <cluster_id> Specifies the ID of the upgraded cluster. For the master-1 and master-2 control plane machines, edit the value of the serverGroupName property in their provider specs to match that of the machine master-0 . An example RHOSP provider spec providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.16 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master-az0 1 serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data 1 This value must match for machines master-0 , master-1 , and master-2 . Note If you edited or recreated machine resources after your initial cluster deployment, you might have to adapt these steps for your configuration. In your RHOSP cluster, find the server group that your control plane instances are in and use that as the value. Run the following command to retrieve information about the control plane machine set resource: USD oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api Run the following command to edit the resource: USD oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api For that resource, set the value of the spec.state property to Active to activate control plane machine sets for your cluster. Your control plane is ready to be managed by the Cluster Control Plane Machine Set Operator. 12.8. Disabling the control plane machine set The .spec.state field in an activated ControlPlaneMachineSet custom resource (CR) cannot be changed from Active to Inactive . To disable the control plane machine set, you must delete the CR so that it is removed from the cluster.
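Before you delete the CR, you might want to confirm whether it is currently active. As a convenience sketch rather than a documented step, the following jsonpath query prints the value of the spec.state field, which is either Active or Inactive:

$ oc get controlplanemachineset.machine.openshift.io cluster \
  -n openshift-machine-api \
  -o jsonpath='{.spec.state}{"\n"}'  # prints Active or Inactive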
When you delete the CR, the Control Plane Machine Set Operator performs cleanup operations and disables the control plane machine set. The Operator then removes the CR from the cluster and creates an inactive control plane machine set with default settings. 12.8.1. Deleting the control plane machine set To stop managing control plane machines with the control plane machine set on your cluster, you must delete the ControlPlaneMachineSet custom resource (CR). Procedure Delete the control plane machine set CR by running the following command: USD oc delete controlplanemachineset.machine.openshift.io cluster \ -n openshift-machine-api Verification Check the control plane machine set custom resource state. A result of Inactive indicates that the removal and replacement process is successful. A ControlPlaneMachineSet CR exists but is not activated. 12.8.2. Checking the control plane machine set custom resource state You can verify the existence and state of the ControlPlaneMachineSet custom resource (CR). Procedure Determine the state of the CR by running the following command: USD oc get controlplanemachineset.machine.openshift.io cluster \ --namespace openshift-machine-api A result of Active indicates that the ControlPlaneMachineSet CR exists and is activated. No administrator action is required. A result of Inactive indicates that a ControlPlaneMachineSet CR exists but is not activated. A result of NotFound indicates that there is no existing ControlPlaneMachineSet CR. 12.8.3. Re-enabling the control plane machine set To re-enable the control plane machine set, you must ensure that the configuration in the CR is correct for your cluster and activate it. Additional resources Activating the control plane machine set custom resource
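For the activation step itself, a rough sketch follows. It assumes that the existing inactive ControlPlaneMachineSet CR named cluster already contains the correct configuration for your cluster, and it uses a merge patch as an assumed convenience equivalent to setting spec.state to Active in an editor. Review the CR before you activate it, because the Operator begins reconciling control plane machines as soon as the CR becomes active.

# Illustrative alternative to editing spec.state with oc edit; verify the CR contents first.
$ oc patch controlplanemachineset.machine.openshift.io cluster \
  -n openshift-machine-api \
  --type merge \
  -p '{"spec":{"state":"Active"}}'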
[ "oc get machine -n openshift-machine-api -l machine.openshift.io/cluster-api-machine-role=master", "NAME PHASE TYPE REGION ZONE AGE <infrastructure_id>-master-0 Running m6i.xlarge us-west-1 us-west-1a 5h19m <infrastructure_id>-master-1 Running m6i.xlarge us-west-1 us-west-1b 5h19m <infrastructure_id>-master-2 Running m6i.xlarge us-west-1 us-west-1a 5h19m", "No resources found in openshift-machine-api namespace.", "oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api", "oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 1 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 2 strategy: type: RollingUpdate 3 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 4 <platform_failure_domains> 5 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> 6 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 7", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc create -f <control_plane_machine_set>.yaml", "oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api", "openstack compute service set <target_node_host_name> nova-compute --disable", "oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api", "oc delete machine -n openshift-machine-api <control_plane_machine_name> 1", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster 1 namespace: openshift-machine-api spec: replicas: 3 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 3 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 4 strategy: type: RollingUpdate 5 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 6 <platform_failure_domains> 7 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 8", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: ami: id: ami-<ami_id_string> 1 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: 2 encrypted: true iops: 0 kmsKey: arn: \"\" volumeSize: 120 volumeType: gp3 credentialsSecret: name: aws-cloud-credentials 3 deviceIndex: 0 iamInstanceProfile: id: <cluster_id>-master-profile 4 instanceType: m6i.xlarge 5 kind: AWSMachineProviderConfig 6 loadBalancers: 7 - name: <cluster_id>-int type: network - name: <cluster_id>-ext type: network metadata: creationTimestamp: null metadataServiceOptions: {} placement: 8 region: <region> 9 availabilityZone: 
\"\" 10 tenancy: 11 securityGroups: - filters: - name: tag:Name values: - <cluster_id>-master-sg 12 subnet: {} 13 userDataSecret: name: master-user-data 14", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: aws: - placement: availabilityZone: <aws_zone_a> 1 subnet: 2 filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_a> 3 type: Filters 4 - placement: availabilityZone: <aws_zone_b> 5 subnet: filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_b> 6 type: Filters platform: AWS 7", "providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "providerSpec: value: instanceType: <compatible_aws_instance_type> 1", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5", "providerSpec: value: metadataServiceOptions: authentication: Required 1", "providerSpec: placement: tenancy: dedicated", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials 1 namespace: openshift-machine-api diagnostics: {} image: 2 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 3 sku: \"\" version: \"\" internalLoadBalancer: <cluster_id>-internal 4 kind: AzureMachineProviderSpec 5 location: <region> 6 managedIdentity: <cluster_id>-identity metadata: creationTimestamp: null name: <cluster_id> networkResourceGroup: <cluster_id>-rg osDisk: 7 diskSettings: {} diskSizeGB: 1024 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <cluster_id> 8 resourceGroup: <cluster_id>-rg subnet: <cluster_id>-master-subnet 9 userDataSecret: name: master-user-data 10 vmSize: Standard_D8s_v3 vnet: <cluster_id>-vnet zone: \"1\" 11", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: azure: - zone: \"1\" 1 - zone: \"2\" - zone: \"3\" platform: Azure 2", "providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker 
redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700", "providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1", "providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2", "oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2", "\"storage\": { \"disks\": [ 1 { \"device\": \"/dev/disk/azure/scsi1/lun0\", 2 \"partitions\": [ 3 { \"label\": \"lun0p1\", 4 \"sizeMiB\": 1024, 5 \"startMiB\": 0 } ] } ], \"filesystems\": [ 6 { \"device\": \"/dev/disk/by-partlabel/lun0p1\", \"format\": \"xfs\", \"path\": \"/var/lib/lun0p1\" } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var/lib/lun0p1\\nWhat=/dev/disk/by-partlabel/lun0p1\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", 8 \"enabled\": true, \"name\": \"var-lib-lun0p1.mount\" } ] }", "oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt", "oc -n openshift-machine-api create secret generic <role>-user-data-x5 \\ 1 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt", "oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster", "apiVersion: machine.openshift.io/v1beta1 kind: ControlPlaneMachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4", "oc get machines", "oc debug node/<node-name> -- chroot /host lsblk", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code=\"BadRequest\" Message=\"Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>.\"", "providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 
trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: osDisk: # managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1", "oc get machine -n openshift-machine-api -l machine.openshift.io/cluster-api-machine-role=master", "providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get ControlPlaneMachineSet/cluster", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials 1 deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 2 labels: null sizeGb: 200 type: pd-ssd kind: GCPMachineProviderSpec 3 machineType: e2-standard-4 metadata: creationTimestamp: null metadataServiceOptions: {} networkInterfaces: - network: <cluster_id>-network subnetwork: <cluster_id>-master-subnet projectID: <project_name> 4 region: <region> 5 serviceAccounts: 6 - email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform shieldedInstanceConfig: {} tags: - <cluster_id>-master targetPools: - <cluster_id>-api userDataSecret: name: master-user-data 7 zone: \"\" 8", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: gcp: - zone: <gcp_zone_a> 1 - zone: <gcp_zone_b> 2 - zone: <gcp_zone_c> - zone: <gcp_zone_d> platform: GCP 3", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: disks: type: pd-ssd 1", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4", "gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: 
openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 1 categories: 2 - key: <category_name> value: <category_value> cluster: 3 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials 4 image: 5 name: <cluster_id>-rhcos type: name kind: NutanixMachineProviderConfig 6 memorySize: 16Gi 7 metadata: creationTimestamp: null project: 8 type: name name: <project_name> subnets: 9 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 10 userDataSecret: name: master-user-data 11 vcpuSockets: 8 12 vcpusPerSocket: 1 13", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials 1 namespace: openshift-machine-api flavor: m1.xlarge 2 image: ocp1-2g2xs-rhcos kind: OpenstackProviderSpec 3 metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: ocp1-2g2xs-nodes tags: openshiftClusterID=ocp1-2g2xs securityGroups: - filter: {} name: ocp1-2g2xs-master 4 serverGroupName: ocp1-2g2xs-master serverMetadata: Name: ocp1-2g2xs-master openshiftClusterID: ocp1-2g2xs tags: - openshiftClusterID=ocp1-2g2xs trunk: true userDataSecret: name: master-user-data", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: platform: OpenStack openstack: - availabilityZone: nova-az0 rootVolume: availabilityZone: cinder-az0 - availabilityZone: nova-az1 rootVolume: availabilityZone: cinder-az1 - availabilityZone: nova-az2 rootVolume: availabilityZone: cinder-az2", "providerSpec: value: flavor: m1.xlarge 1", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 2 kind: VSphereMachineProviderSpec 3 memoryMiB: 16384 4 metadata: creationTimestamp: null network: 5 devices: - networkName: <vm_network_name> numCPUs: 4 6 numCoresPerSocket: 4 7 snapshot: \"\" template: <vm_template_name> 8 userDataSecret: name: master-user-data 9 workspace: 10 datacenter: <vcenter_data_center_name> 11 datastore: <vcenter_datastore_name> 12 folder: <path_to_vcenter_vm_folder> 13 resourcePool: <vsphere_resource_pool> 14 server: <vcenter_server_ip> 15", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: 1 platform: VSphere vsphere: 2 - name: <failure_domain_name1> - name: <failure_domain_name2>", "oc get infrastructure cluster -o=jsonpath={.spec.platformSpec.vsphere.failureDomains[0].name}", "https://vcenter.example.com/ui/app/tags/tag/urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL/permissions", "urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: 
providerSpec: value: tagIDs: 1 - <tag_id_value> 2", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: - name: EtcdQuorumOperator 1 owner: clusteroperator/etcd 2", "oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api", "oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api", "oc edit machine <control_plane_machine_name>", "oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api", "oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api -o wide", "oc edit machine <control_plane_machine_name>", "oc edit machine/<cluster_id>-master-0 -n openshift-machine-api", "providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.14 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 rootVolume: availabilityZone: nova 1 diskSize: 30 sourceUUID: rhcos-4.12 volumeType: fast-0 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data", "oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api", "oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api", "oc edit machine/<cluster_id>-master-1 -n openshift-machine-api", "providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.16 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master-az0 1 serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data", "oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api", "oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api", "oc delete controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api", "oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/machine_management/managing-control-plane-machines
Chapter 3. Technology Previews
Chapter 3. Technology Previews Important Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about support scope, see Technology Preview Features Support Scope . 3.1. Cluster rebalancing with Cruise Control Cruise Control stays in Technology Preview in this release, with some new enhancements . You can now deploy Cruise Control to your AMQ Streams cluster and use it to rebalance the Kafka cluster using optimization goals - predefined constraints on CPU, disk, and network load. In a balanced Kafka cluster, the workload is more evenly distributed across the broker pods. Cruise Control is configured and deployed as part of a Kafka resource. Example YAML configuration files for Cruise Control are provided in examples/cruise-control/ . When Cruise Control is deployed, you can use KafkaRebalance custom resources to: Generate optimization proposals from multiple optimization goals Rebalance a Kafka cluster based on an optimization proposal Other Cruise Control features are not currently supported, including anomaly detection, notifications, write-your-own goals, and changing the topic replication factor. See Cruise Control for cluster rebalancing . 3.1.1. Enhancements to the Technology Preview The following enhancements have been added to the initial Technology Preview of cluster rebalancing with Cruise Control. Rebalance performance tuning Five new performance tuning options allow you to control how cluster rebalances are executed and reduce their performance impact. For each batch of partition reassignment commands that comprise a cluster rebalance, you can configure the following: Maximum concurrent partition movements per broker (the default is 5) Maximum concurrent intra-broker partition movements (the default is 2) Maximum concurrent leader movements (the default is 1000) Bandwidth in bytes-per-second to assign to partition reassignment (the default is no limit) Replica movement strategy (the default is BaseReplicaMovementStrategy ) Previously, AMQ Streams inherited these options from Cruise Control, so their default values could not be adjusted. You can set performance tuning options for the Cruise Control server, individual rebalances, or both. For the Cruise Control server, set options in the Kafka custom resource, under spec.cruiseControl.config . For a cluster rebalance, set options in the spec property of the KafkaRebalance custom resource. See Rebalance performance tuning overview . Exclude topics from optimization proposals You can now exclude one or more topics from an optimization proposal. Those topics are not included in the calculation of partition replica and partition leadership movements for the cluster rebalance. To exclude topics, specify a regular expression matching the topic names in the KafkaRebalance custom resource, in the spec.excludedTopicsRegex property. When viewing a generated optimization proposal, the excludedTopics property shows you the topics that were excluded. See Rebalance performance tuning overview . 
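As an illustration of how the tuning options and topic exclusion can be combined, the following KafkaRebalance sketch sets a few of the performance tuning options together with the excludedTopicsRegex property described above. The concurrent-movement and throttle field names follow the upstream Strimzi KafkaRebalance schema and should be verified against your AMQ Streams version; the cluster name, values, and topic pattern are placeholders.
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaRebalance
metadata:
  name: my-tuned-rebalance
  labels:
    strimzi.io/cluster: amq-streams-cluster
spec:
  # Limit concurrent partition and leader movements to reduce rebalance impact (assumed field names)
  concurrentPartitionMovementsPerBroker: 2
  concurrentLeaderMovements: 500
  # Throttle replica movement bandwidth, in bytes per second
  replicationThrottle: 5000000
  # Topics matching this regular expression are excluded from the optimization proposal
  excludedTopicsRegex: "heartbeat-.*"
Equivalent server-wide defaults can instead be set in the Kafka custom resource under spec.cruiseControl.config, as described above.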
CPU capacity goal support Rebalancing a Kafka cluster based on CPU capacity is now supported through the following configurations: The CpuCapacityGoal optimization goal The cpuUtilization capacity limit The CPU capacity goal prevents the CPU utilization of each broker from exceeding a maximum percentage threshold. The default threshold is set as 100% of CPU capacity per broker. To reduce the percentage threshold, configure the cpuUtilization capacity limit in the Kafka custom resource. Capacity limits apply to all brokers. CPU capacity is preset as a hard goal in Cruise Control. Therefore, it is inherited from Cruise Control as a hard goal, unless you override the preset hard goals in the hard.goals property in Kafka.spec.cruiseControl.config . Example configuration for CPU capacity goal in a KafkaRebalance custom resource apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: amq-streams-cluster spec: goals: - CpuCapacityGoal - DiskCapacityGoal #... Example configuration for percentage CPU utilization capacity limit apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: amq-streams-cluster spec: # ... cruiseControl: # ... brokerCapacity: cpuUtilization: 85 disk: 100Gi # ... See Optimization goals overview and Capacity configuration .
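The hard-goal override mentioned above can be sketched as follows. The goal class names are standard Cruise Control goal classes; which goals to keep as hard goals is an assumption to adapt to your own policy.
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: amq-streams-cluster
spec:
  # ...
  cruiseControl:
    # ...
    config:
      # Overriding the preset hard goals means CpuCapacityGoal is no longer enforced as a hard goal
      hard.goals: >
        com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal,
        com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal
    # ...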
[ "apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaRebalance metadata: name: my-rebalance labels: strimzi.io/cluster: amq-streams-cluster spec: goals: - CpuCapacityGoal - DiskCapacityGoal #", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: amq-streams-cluster spec: # cruiseControl: # brokerCapacity: cpuUtilization: 85 disk: 100Gi #" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_amq_streams_1.6_on_openshift/tech-preview-str
Chapter 6. AWS Lambda Sink
Chapter 6. AWS Lambda Sink Send a payload to an AWS Lambda function 6.1. Configuration Options The following table summarizes the configuration options available for the aws-lambda-sink Kamelet: Property Name Description Type Default Example accessKey * Access Key The access key obtained from AWS string function * Function Name The Lambda Function name string region * AWS Region The AWS region to connect to string "eu-west-1" secretKey * Secret Key The secret key obtained from AWS string Note Fields marked with an asterisk (*) are mandatory. 6.2. Dependencies At runtime, the aws-lambda-sink Kamelet relies upon the presence of the following dependencies: camel:kamelet camel:aws2-lambda 6.3. Usage This section describes how you can use the aws-lambda-sink . 6.3.1. Knative Sink You can use the aws-lambda-sink Kamelet as a Knative sink by binding it to a Knative object. aws-lambda-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-lambda-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-lambda-sink properties: accessKey: "The Access Key" function: "The Function Name" region: "eu-west-1" secretKey: "The Secret Key" 6.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 6.3.1.2. Procedure for using the cluster CLI Save the aws-lambda-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-lambda-sink-binding.yaml 6.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel aws-lambda-sink -p "sink.accessKey=The Access Key" -p "sink.function=The Function Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 6.3.2. Kafka Sink You can use the aws-lambda-sink Kamelet as a Kafka sink by binding it to a Kafka topic. aws-lambda-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-lambda-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-lambda-sink properties: accessKey: "The Access Key" function: "The Function Name" region: "eu-west-1" secretKey: "The Secret Key" 6.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 6.3.2.2. Procedure for using the cluster CLI Save the aws-lambda-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f aws-lambda-sink-binding.yaml 6.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-lambda-sink -p "sink.accessKey=The Access Key" -p "sink.function=The Function Name" -p "sink.region=eu-west-1" -p "sink.secretKey=The Secret Key" This command creates the KameletBinding in the current namespace on the cluster. 6.4. 
Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-lambda-sink.kamelet.yaml
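As a quick check after applying either binding, you can confirm that the KameletBinding resource exists and follow the logs of the integration it creates. This is a minimal sketch that assumes the binding name used in the examples above and a locally installed kamel CLI:
oc get kameletbindings aws-lambda-sink-binding
kamel logs aws-lambda-sink-binding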
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-lambda-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-lambda-sink properties: accessKey: \"The Access Key\" function: \"The Function Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"", "apply -f aws-lambda-sink-binding.yaml", "kamel bind channel:mychannel aws-lambda-sink -p \"sink.accessKey=The Access Key\" -p \"sink.function=The Function Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-lambda-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-lambda-sink properties: accessKey: \"The Access Key\" function: \"The Function Name\" region: \"eu-west-1\" secretKey: \"The Secret Key\"", "apply -f aws-lambda-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-lambda-sink -p \"sink.accessKey=The Access Key\" -p \"sink.function=The Function Name\" -p \"sink.region=eu-west-1\" -p \"sink.secretKey=The Secret Key\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/aws-lambda-sink
7.70. gnome-screensaver
7.70. gnome-screensaver 7.70.1. RHBA-2013:0390 - gnome-screensaver bug fix update Updated gnome-screensaver packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The gnome-screensaver packages contain the GNOME project's official screen saver program. The screen saver is designed for improved integration with the GNOME desktop, including themeability, language support, and Human Interface Guidelines (HIG) compliance. It also provides screen-locking and fast user-switching from a locked screen. Bug Fixes BZ# 648869 Previously, NVIDIA hardware did not support the X Resize and Rotate Extension (xRandR) gamma changes. Consequently, the fade-out function did not work on the NVIDIA hardware. With this update, xRandR gamma support detection code fails on NVIDIA cards, and the XF86VM gamma fade extension is automatically used as a fallback so the fade-out function works as expected. BZ# 744763 Previously, the mouse cursor could be moved to a non-primary monitor so the unlock dialog box did not appear when the user moved the mouse. This bug has been fixed and the mouse cursor can no longer be moved to a non-primary monitor. As a result, the unlock dialog box comes up anytime the user moves the mouse. BZ#752230 Previously, the shake animation of the unlock dialog box could appear to be very slow. This was because the background was updated every time the window's size allocation changed, and the widget's size allocation consequently changed every frame of the shake animation. The underlying source code has been modified to ensure a reasonable speed of the shake animation. BZ#759395 When a Mandatory profile was enabled, the "Lock screen when screen saver is active" option in the Screensaver Preferences window was not disabled. This bug could expose the users to a security risk. With this update, the lock-screen option is disabled as expected in the described scenario. BZ#824752 When using dual screens, moving the mouse did not unlock gnome-screensaver after the initial timeout. The users had to press a key to unlock the screen. The underlying source code has been modified and the user can now unlock gnome-screensaver by moving the mouse. All users of gnome-screensaver are advised to upgrade to these updated packages, which fix these bugs. 7.70.2. RHBA-2013:1178 - gnome-screensaver bug fix update Updated gnome-screensaver packages that fix one bug are now available for Red Hat Enterprise Linux 6. The gnome-screensaver packages contain the GNOME project's official screen saver program. It is designed for improved integration with the GNOME desktop, including themeability, language support, and Human Interface Guidelines (HIG) compliance. It also provides screen-locking and fast user-switching from a locked screen. Bug Fix BZ# 994868 Previously, when using virt-manager, virt-viewer, and spice-xpi applications, users were unable to enter the gnome-screensaver password after the screen saver started. This happened only when the VM system used the Compiz compositing window manager. After users released the mouse cursor, then pressed a key to enter a password, the dialog did not accept any input. This happened due to incorrect assignment of window focus to applications that did not drop their keyboard grab. With this update, window focus is now properly assigned to the correct place, and attempts to enter the gnome-screensaver password no longer fail in the described scenario. Users of gnome-screensaver are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/gnome-screensaver
Getting Started with Red Hat build of Apache Camel for Spring Boot
Getting Started with Red Hat build of Apache Camel for Spring Boot Red Hat build of Apache Camel 4.8
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/getting_started_with_red_hat_build_of_apache_camel_for_spring_boot/index
Clusters
Clusters Red Hat Advanced Cluster Management for Kubernetes 2.11 Cluster management
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/index
4.5. Creating the Replica: Introduction
4.5. Creating the Replica: Introduction The ipa-replica-install utility is used to install a new replica from an existing IdM server. Install Identity Management replicas one at a time. The installation of multiple replicas at the same time is not supported. Note This chapter describes the simplified replica installation introduced in Red Hat Enterprise Linux 7.3. The procedures require domain level 1 (see Chapter 7, Displaying and Raising the Domain Level ). For documentation on installing a replica at domain level 0, see Appendix D, Managing Replicas at Domain Level 0 . You can install a new replica: on an existing IdM client by promoting the client to a replica: see the section called "Promoting an Existing Client to a Replica" on a machine that has not yet been enrolled in the IdM domain: see the section called "Installing a Replica on a Machine That Is Not a Client" In both of these situations, you can customize your replica by adding options to ipa-replica-install : see the section called "Using ipa-replica-install to Configure the Replica for Your Use Case" . To install the replica as hidden, pass the --hidden-replica parameter to ipa-replica-install . For further details about hidden replicas, see Section 4.2.3, "The Hidden Replica Mode" . Important If the IdM server you are replicating has a trust with Active Directory, set up the replica as a trust agent after running ipa-replica-install . See Trust Controllers and Trust Agents in the Windows Integration Guide . Promoting an Existing Client to a Replica To install the replica on an existing client, you must make sure the client is authorized to be promoted. To achieve this, choose one of the following: Provide a privileged user's credentials The default privileged user is admin . There are multiple ways to provide the user's credentials. You can: let IdM prompt you to get the credentials interactively Note This is the default way to provide the privileged user's credentials. If no credentials are available when ipa-replica-install runs, the installation automatically prompts you. log in as the user before running ipa-replica-install on the client: add the user's principal name and password to ipa-replica-install directly: Add the client to the ipaservers host group Membership in ipaservers grants the machine elevated privileges analogous to a privileged user's credentials. You will not be required to provide the user's credentials. Example: Section 4.5.1, "Promoting a Client to a Replica Using a Host Keytab" Installing a Replica on a Machine That Is Not a Client When run on a machine that has not yet been enrolled in the IdM domain, ipa-replica-install first enrolls the machine as a client and then installs the replica components. To install a replica in this situation, choose one of the following: Provide a privileged user's credentials The default privileged user is admin . To provide the credentials, add the principal name and password to ipa-replica-install directly: Provide a random password for the client You must generate the random password on a server before installing the replica. You will not be required to provide the user's credentials during the installation. Example: Section 4.5.2, "Installing a Replica Using a Random Password" By default, the replica is installed against the first IdM server discovered by the client installer. 
To install the replica against a particular server, add the following options to ipa-replica-install : --server for the server's fully qualified domain name (FQDN) --domain for the IdM DNS domain Using ipa-replica-install to Configure the Replica for Your Use Case When run without any options, ipa-replica-install only sets up basic server services. To install additional services, such as DNS or a certificate authority (CA), add options to ipa-replica-install . Warning Red Hat strongly recommends to keep the CA services installed on more than one server. For information on installing a replica of the initial server including the CA services, see Section 4.5.4, "Installing a Replica with a CA" . If you install the CA on only one server, you risk losing the CA configuration without a chance of recovery if the CA server fails. See Section B.2.6, "Recovering a Lost CA Server" for details. For example scenarios of installing a replica with the most notable options, see: Section 4.5.3, "Installing a Replica with DNS" , using --setup-dns and --forwarder Section 4.5.4, "Installing a Replica with a CA" , using --setup-ca Section 4.5.5, "Installing a Replica from a Server without a CA" , using --dirsrv-cert-file , --dirsrv-pin , --http-cert-file , and --http-pin You can also use the --dirsrv-config-file parameter to change default Directory Server settings, by specifying the path to a LDIF file with custom values. For more information, see IdM now supports setting individual Directory Server options during server or replica installation in the Release Notes for Red Hat Enterprise Linux 7.3 . For a complete list of the options used to configure the replica, see the ipa-replica-install (1) man page. 4.5.1. Promoting a Client to a Replica Using a Host Keytab In this procedure, an existing IdM client is promoted to a replica using its own host keytab to authorize the promotion. The procedure does not require you to provide the administrator or Directory Manager (DM) credentials. It is therefore more secure because no sensitive information is exposed on the command line. On an existing server: Log in as the administrator. Add the client machine to the ipaservers host group. Membership in ipaservers grants the machine elevated privileges analogous to the administrator's credentials. On the client, run the ipa-replica-install utility. Optionally, if the IdM server you are replicating has a trust with Active Directory, set up the replica as a trust agent or trust controller. For details, see Trust Controllers and Trust Agents in the Windows Integration Guide . 4.5.2. Installing a Replica Using a Random Password In this procedure, a replica is installed from scratch on a machine that is not yet an IdM client. To authorize the enrollment, a client-specific random password valid for one client enrollment only is used. The procedure does not require you to provide the administrator or Directory Manager (DM) credentials. It is therefore more secure because no sensitive information is exposed on the command line. On an existing server: Log in as the administrator. Add the new machine as an IdM host. Use the --random option with the ipa host-add command to generate a random one-time password to be used for the replica installation. The generated password will become invalid after you use it to enroll the machine into the IdM domain. It will be replaced with a proper host keytab after the enrollment is finished. Add the machine to the ipaservers host group. 
Membership in ipaservers grants the machine elevated privileges required to set up the necessary server services. On the machine where you want to install the replica, run ipa-replica-install , and provide the random password using the --password option. Enclose the password in single quotes (') because it often contains special characters: Optionally, if the IdM server you are replicating has a trust with Active Directory, set up the replica as a trust agent or trust controller. For details, see Trust Controllers and Trust Agents in the Windows Integration Guide . 4.5.3. Installing a Replica with DNS This procedure works for installing a replica on a client as well as on a machine that is not part of the IdM domain yet. See Section 4.5, "Creating the Replica: Introduction" for details. Run ipa-replica-install with these options: --setup-dns to create a DNS zone if it does not exist already and configure the replica as the DNS server --forwarder to specify a forwarder, or --no-forwarder if you do not want to use any forwarders To specify multiple forwarders for failover reasons, use --forwarder multiple times. For example: Note The ipa-replica-install utility accepts a number of other options related to DNS settings, such as --no-reverse or --no-host-dns . For more information about them, see the ipa-replica-install (1) man page. If the initial server was created with DNS enabled, the replica is automatically created with the proper DNS entries. The entries ensure that IdM clients will be able to discover the new server. If the initial server did not have DNS enabled, add the DNS records manually. The following DNS records are necessary for the domain services: _ldap._tcp _kerberos._tcp _kerberos._udp _kerberos-master._tcp _kerberos-master._udp _ntp._udp _kpasswd._tcp _kpasswd._udp This example shows how to verify that the entries are present: Set the appropriate values for the DOMAIN and NAMESERVER variables: Use the following command to check for the DNS entries: Add DNS delegation from the parent domain to the IdM DNS domain. For example, if the IdM DNS domain is ipa.example.com , add a name server (NS) record to the example.com parent domain. Important This step must be repeated each time an IdM DNS server is installed. Optional, but recommended. Manually add other DNS servers as backup servers in case the replica becomes unavailable. See Section 33.11.1, "Setting up Additional Name Servers" . This is recommended especially for situations when the new replica is your first DNS server in the IdM domain. Optionally, if the IdM server you are replicating has a trust with Active Directory, set up the replica as a trust agent or trust controller. For details, see Trust Controllers and Trust Agents in the Windows Integration Guide . 4.5.4. Installing a Replica with a CA This procedure works for installing a replica on a client as well as on a machine that is not part of the IdM domain yet. See Section 4.5, "Creating the Replica: Introduction" for details. Run ipa-replica-install with the --setup-ca option. The --setup-ca option copies the CA configuration from the initial server's configuration, regardless of whether the IdM CA on the server is a root CA or whether it is subordinated to an external CA. Note For details on the supported CA configurations, see Section 2.3.2, "Determining What CA Configuration to Use" . Optionally, if the IdM server you are replicating has a trust with Active Directory, set up the replica as a trust agent or trust controller. 
For details, see Trust Controllers and Trust Agents in the Windows Integration Guide . 4.5.5. Installing a Replica from a Server without a CA This procedure works for installing a replica on a client as well as on a machine that is not part of the IdM domain yet. See Section 4.5, "Creating the Replica: Introduction" for details. Important You cannot install a server or replica using self-signed third-party server certificates. Run ipa-replica-install , and provide the required certificate files by adding these options: --dirsrv-cert-file --dirsrv-pin --http-cert-file --http-pin For details about the files that are provided using these options, see Section 2.3.6, "Installing Without a CA" . For example: Note Do not add the --ca-cert-file option. The ipa-replica-install utility takes this part of the certificate information automatically from the master server. Optionally, if the IdM server you are replicating has a trust with Active Directory, set up the replica as a trust agent or trust controller. For details, see Trust Controllers and Trust Agents in the Windows Integration Guide .
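The options above can be combined in a single run of ipa-replica-install. As an illustrative sketch only, with hypothetical host names, promoting an authorized client to a replica that also provides DNS and a CA could look like this:
On an existing server, authorize the client:
kinit admin
ipa hostgroup-add-member ipaservers --hosts replica.example.com
On the client being promoted:
ipa-replica-install --setup-ca --setup-dns --forwarder 192.0.2.1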
[ "kinit admin", "ipa-replica-install --principal admin --admin-password admin_password", "ipa-replica-install --principal admin --admin-password admin_password", "kinit admin", "ipa hostgroup-add-member ipaservers --hosts client.example.com Host-group: ipaservers Description: IPA server hosts Member hosts: server.example.com, client.example.com ------------------------- Number of members added 1 -------------------------", "ipa-replica-install", "kinit admin", "ipa host-add client.example.com --random -------------------------------------------------- Added host \"client.example.com\" -------------------------------------------------- Host name: client.example.com Random password: W5YpARl=7M.n Password: True Keytab: False Managed by: server.example.com", "ipa hostgroup-add-member ipaservers --hosts client.example.com Host-group: ipaservers Description: IPA server hosts Member hosts: server.example.com, client.example.com ------------------------- Number of members added 1 -------------------------", "ipa-replica-install --password ' W5YpARl=7M.n '", "ipa-replica-install --setup-dns --forwarder 192.0.2.1", "DOMAIN= example.com NAMESERVER= replica", "for i in _ldap._tcp _kerberos._tcp _kerberos._udp _kerberos-master._tcp _kerberos-master._udp _ntp._udp ; do dig @USD{NAMESERVER} USD{i}.USD{DOMAIN} srv +nocmd +noquestion +nocomments +nostats +noaa +noadditional +noauthority done | egrep \"^_\" _ldap._tcp.example.com. 86400 IN SRV 0 100 389 server1.example.com. _ldap._tcp.example.com. 86400 IN SRV 0 100 389 server2.example.com. _kerberos._tcp.example.com. 86400 IN SRV 0 100 88 server1.example.com.", "ipa-replica-install --setup-ca", "ipa-replica-install --dirsrv-cert-file /tmp/server.crt --dirsrv-cert-file /tmp/server.key --dirsrv-pin secret --http-cert-file /tmp/server.crt --http-cert-file /tmp/server.key --http-pin secret" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/creating-the-replica
7.263. vgabios
7.263. vgabios 7.263.1. RHBA-2013:0487 - vgabios bug fix update An updated vgabios package that fixes one bug is now available for Red Hat Enterprise Linux 6. The vgabios package provides a GNU Lesser General Public License (LGPL) implementation of a BIOS for video cards. The vgabios package contains BIOS images that are intended to be used in the Kernel Virtual Machine (KVM). Bug Fix BZ# 840087 Previously, an attempt to boot a Red Hat Enterprise Virtualization Hypervisor ISO in a virtual machine was unsuccessful. The boot menu appeared but then stopped responding. The underlying source code has been modified and the virtual machine now works as expected in the described scenario. All users of vgabios are advised to upgrade to this updated package, which fixes this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/vgabios
Chapter 4. Mirroring images for a disconnected installation using the oc-mirror plugin
Chapter 4. Mirroring images for a disconnected installation using the oc-mirror plugin Running your cluster in a restricted network without direct internet connectivity is possible by installing the cluster from a mirrored set of OpenShift Container Platform container images in a private registry. This registry must be running at all times as long as the cluster is running. See the Prerequisites section for more information. You can use the oc-mirror OpenShift CLI ( oc ) plugin to mirror images to a mirror registry in your fully or partially disconnected environments. You must run oc-mirror from a system with internet connectivity in order to download the required images from the official Red Hat registries. 4.1. About the oc-mirror plugin You can use the oc-mirror OpenShift CLI ( oc ) plugin to mirror all required OpenShift Container Platform content and other images to your mirror registry by using a single tool. It provides the following features: Provides a centralized method to mirror OpenShift Container Platform releases, Operators, helm charts, and other images. Maintains update paths for OpenShift Container Platform and Operators. Uses a declarative image set configuration file to include only the OpenShift Container Platform releases, Operators, and images that your cluster needs. Performs incremental mirroring, which reduces the size of future image sets. Prunes images from the target mirror registry that were excluded from the image set configuration since the previous execution. Optionally generates supporting artifacts for OpenShift Update Service (OSUS) usage. When using the oc-mirror plugin, you specify which content to mirror in an image set configuration file. In this YAML file, you can fine-tune the configuration to only include the OpenShift Container Platform releases and Operators that your cluster needs. This reduces the amount of data that you need to download and transfer. The oc-mirror plugin can also mirror arbitrary helm charts and additional container images to assist users in seamlessly synchronizing their workloads onto mirror registries. The first time you run the oc-mirror plugin, it populates your mirror registry with the required content to perform your disconnected cluster installation or update. In order for your disconnected cluster to continue receiving updates, you must keep your mirror registry updated. To update your mirror registry, you run the oc-mirror plugin using the same configuration as the first time you ran it. The oc-mirror plugin references the metadata from the storage backend and only downloads what has been released since the last time you ran the tool. This provides update paths for OpenShift Container Platform and Operators and performs dependency resolution as required. 4.1.1. High level workflow The following steps outline the high-level workflow on how to use the oc-mirror plugin to mirror images to a mirror registry: Create an image set configuration file. Mirror the image set to the target mirror registry by using one of the following methods: Mirror an image set directly to the target mirror registry. Mirror an image set to disk, transfer the image set to the target environment, then upload the image set to the target mirror registry. Configure your cluster to use the resources generated by the oc-mirror plugin. Repeat these steps to update your target mirror registry as necessary.
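As a compact preview of the two methods, using the same placeholder registry and paths as the procedures later in this chapter, the workflow boils down to the following command sketch:
Partially disconnected (mirror to mirror):
oc mirror --config=./imageset-config.yaml docker://registry.example:5000
Fully disconnected (mirror to disk, transfer the archive, then disk to mirror):
oc mirror --config=./imageset-config.yaml file://<path_to_output_directory>
oc mirror --from=./mirror_seq1_000000.tar docker://registry.example:5000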
Important When using the oc-mirror CLI plugin to populate a mirror registry, any further updates to the target mirror registry must be made by using the oc-mirror plugin. 4.2. oc-mirror plugin compatibility and support The oc-mirror plugin supports mirroring OpenShift Container Platform payload images and Operator catalogs for OpenShift Container Platform versions 4.12 and later. Note On aarch64 , ppc64le , and s390x architectures the oc-mirror plugin is only supported for OpenShift Container Platform versions 4.14 and later. Use the latest available version of the oc-mirror plugin regardless of which versions of OpenShift Container Platform you need to mirror. Additional resources For information on updating oc-mirror, see Viewing the image pull source . 4.3. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry that supports Docker v2-2 , such as Red Hat Quay. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , which is a small-scale container registry included with OpenShift Container Platform subscriptions. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. Additional resources For information about viewing the CRI-O logs to view the image source, see Viewing the image pull source . 4.4. Prerequisites You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as Red Hat Quay. Note If you use Red Hat Quay, you must use version 3.6 or later with the oc-mirror plugin. 
If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Red Hat Quay Operator . If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat Support. If you do not already have an existing solution for a container image registry, subscribers of OpenShift Container Platform are provided a mirror registry for Red Hat OpenShift . The mirror registry for Red Hat OpenShift is included with your subscription and is a small-scale container registry that can be used to mirror the required container images of OpenShift Container Platform in disconnected installations. 4.5. Preparing your mirror hosts Before you can use the oc-mirror plugin to mirror images, you must install the plugin and create a container image registry credentials file to allow the mirroring from Red Hat to your mirror. 4.5.1. Installing the oc-mirror OpenShift CLI plugin Install the oc-mirror OpenShift CLI plugin to manage image sets in disconnected environments. Prerequisites You have installed the OpenShift CLI ( oc ). If you are mirroring image sets in a fully disconnected environment, ensure the following: You have installed the oc-mirror plugin on the host that has internet access. The host in the disconnected environment has access to the target mirror registry. You have set the umask parameter to 0022 on the operating system that uses oc-mirror. You have installed the correct binary for the RHEL version that you are using. Procedure Download the oc-mirror CLI plugin. Navigate to the Downloads page of the OpenShift Cluster Manager . Under the OpenShift disconnected installation tools section, click Download for OpenShift Client (oc) mirror plugin and save the file. Extract the archive: USD tar xvzf oc-mirror.tar.gz If necessary, update the plugin file to be executable: USD chmod +x oc-mirror Note Do not rename the oc-mirror file. Install the oc-mirror CLI plugin by placing the file in your PATH , for example, /usr/local/bin : USD sudo mv oc-mirror /usr/local/bin/. Verification Verify that the plugin for oc-mirror v1 is successfully installed by running the following command: USD oc mirror help Additional resources Installing and using CLI plugins 4.5.2. Configuring credentials that allow images to be mirrored Create a container image registry credentials file that enables you to mirror images from Red Hat to your mirror. Warning Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install cluster, all of the machines in the cluster will have write access to your mirror registry. Warning This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret. Prerequisites You configured a mirror registry to use in your disconnected environment. You identified an image repository location on your mirror registry to mirror images into. You provisioned a mirror registry account that allows images to be uploaded to that image repository. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from Red Hat OpenShift Cluster Manager . Make a copy of your pull secret in JSON format: USD cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. 
The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "you@example.com" }, "quay.io": { "auth": "b3BlbnNo...", "email": "you@example.com" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "you@example.com" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "you@example.com" } } } Save the file as either ~/.docker/config.json or USDXDG_RUNTIME_DIR/containers/auth.json : If the .docker or USDXDG_RUNTIME_DIR/containers directories do not exist, create one by entering the following command: USD mkdir -p <directory_name> Where <directory_name> is either ~/.docker or USDXDG_RUNTIME_DIR/containers . Copy the pull secret to the appropriate directory by entering the following command: USD cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file> Where <directory_name> is either ~/.docker or USDXDG_RUNTIME_DIR/containers , and <auth_file> is either config.json or auth.json . Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. Edit the JSON file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "you@example.com" } }, 1 Specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 Specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "you@example.com" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "you@example.com" }, "quay.io": { "auth": "b3BlbnNo...", "email": "you@example.com" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "you@example.com" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "you@example.com" } } }
Edit the file and adjust the settings as necessary: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.16 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 6 packages: - name: serverless-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 9 helm: {} 1 Add archiveSize to set the maximum size, in GiB, of each file within the image set. 2 Set the back-end location to save the image set metadata to. This location can be a registry or local directory. It is required to specify storageConfig values. 3 Set the registry URL for the storage backend. 4 Set the channel to retrieve the OpenShift Container Platform images from. 5 Add graph: true to build and push the graph-data image to the mirror registry. The graph-data image is required to create OpenShift Update Service (OSUS). The graph: true field also generates the UpdateService custom resource manifest. The oc command-line interface (CLI) can use the UpdateService custom resource manifest to create OSUS. For more information, see About the OpenShift Update Service . 6 Set the Operator catalog to retrieve the OpenShift Container Platform images from. 7 Specify only certain Operator packages to include in the image set. Remove this field to retrieve all packages in the catalog. 8 Specify only certain channels of the Operator packages to include in the image set. You must always include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command: oc mirror list operators --catalog=<catalog_name> --package=<package_name> . 9 Specify any additional images to include in image set. Note The graph: true field also mirrors the ubi-micro image along with other mirrored images. When upgrading OpenShift Container Platform Extended Update Support (EUS) versions, an intermediate version might be required between the current and target versions. For example, if the current version is 4.14 and target version is 4.16 , you might need to include a version such as 4.15.8 in the ImageSetConfiguration when using the oc-mirror plugin v1. The oc-mirror plugin v1 might not always detect this automatically, so check the Cincinnati graph web page to confirm any required intermediate versions and add them manually to your configuration. See "Image set configuration parameters" for the full list of parameters and "Image set configuration examples" for various mirroring use cases. Save the updated file. This image set configuration file is required by the oc mirror command when mirroring content. Additional resources Image set configuration parameters Image set configuration examples Using the OpenShift Update Service in a disconnected environment 4.7. Mirroring an image set to a mirror registry You can use the oc-mirror CLI plugin to mirror images to a mirror registry in a partially disconnected environment or in a fully disconnected environment . These procedures assume that you already have your mirror registry set up. 4.7.1. Mirroring an image set in a partially disconnected environment In a partially disconnected environment, you can mirror an image set directly to the target mirror registry. 4.7.1.1. 
Mirroring from mirror to mirror You can use the oc-mirror plugin to mirror an image set directly to a target mirror registry that is accessible during image set creation. You are required to specify a storage backend in the image set configuration file. This storage backend can be a local directory or a Docker v2 registry. The oc-mirror plugin stores metadata in this storage backend during image set creation. Important Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry. Prerequisites You have access to the internet to get the necessary container images. You have installed the OpenShift CLI ( oc ). You have installed the oc-mirror CLI plugin. You have created the image set configuration file. Procedure Run the oc mirror command to mirror the images from the specified image set configuration to a specified registry: USD oc mirror --config=./<imageset-config.yaml> \ 1 docker://registry.example:5000 2 1 Specify the image set configuration file that you created. For example, imageset-config.yaml . 2 Specify the registry to mirror the image set file to. The registry must start with docker:// . If you specify a top-level namespace for the mirror registry, you must also use this same namespace on subsequent executions. Verification Navigate into the oc-mirror-workspace/ directory that was generated. Navigate into the results directory, for example, results-1639608409/ . Verify that YAML files are present for the ImageContentSourcePolicy and CatalogSource resources. Note The repositoryDigestMirrors section of the ImageContentSourcePolicy YAML file is used for the install-config.yaml file during installation. steps Configure your cluster to use the resources generated by oc-mirror. Troubleshooting Unable to retrieve source image . 4.7.2. Mirroring an image set in a fully disconnected environment To mirror an image set in a fully disconnected environment, you must first mirror the image set to disk , then mirror the image set file on disk to a mirror . 4.7.2.1. Mirroring from mirror to disk You can use the oc-mirror plugin to generate an image set and save the contents to disk. The generated image set can then be transferred to the disconnected environment and mirrored to the target registry. Important Depending on the configuration specified in the image set configuration file, using oc-mirror to mirror images might download several hundreds of gigabytes of data to disk. The initial image set download when you populate the mirror registry is often the largest. Because you only download the images that changed since the last time you ran the command, when you run the oc-mirror plugin again, the generated image set is often smaller. You are required to specify a storage backend in the image set configuration file. This storage backend can be a local directory or a docker v2 registry. The oc-mirror plugin stores metadata in this storage backend during image set creation. Important Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry. Prerequisites You have access to the internet to obtain the necessary container images. You have installed the OpenShift CLI ( oc ). You have installed the oc-mirror CLI plugin. You have created the image set configuration file. 
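Because the initial image set download can reach several hundred gigabytes, it is worth confirming that the output location has enough free space before you begin. A quick check, reusing the <path_to_output_directory> placeholder from the procedure that follows:
$ df -h <path_to_output_directory>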
Procedure Run the oc mirror command to mirror the images from the specified image set configuration to disk: USD oc mirror --config=./imageset-config.yaml \ 1 file://<path_to_output_directory> 2 1 Pass in the image set configuration file that was created. This procedure assumes that it is named imageset-config.yaml . 2 Specify the target directory where you want to output the image set file. The target directory path must start with file:// . Verification Navigate to your output directory: USD cd <path_to_output_directory> Verify that an image set .tar file was created: USD ls Example output mirror_seq1_000000.tar steps Transfer the image set .tar file to the disconnected environment. Troubleshooting Unable to retrieve source image . 4.7.2.2. Mirroring from disk to mirror You can use the oc-mirror plugin to mirror the contents of a generated image set to the target mirror registry. Prerequisites You have installed the OpenShift CLI ( oc ) in the disconnected environment. You have installed the oc-mirror CLI plugin in the disconnected environment. You have generated the image set file by using the oc mirror command. You have transferred the image set file to the disconnected environment. Procedure Run the oc mirror command to process the image set file on disk and mirror the contents to a target mirror registry: USD oc mirror --from=./mirror_seq1_000000.tar \ 1 docker://registry.example:5000 2 1 Pass in the image set .tar file to mirror, named mirror_seq1_000000.tar in this example. If an archiveSize value was specified in the image set configuration file, the image set might be broken up into multiple .tar files. In this situation, you can pass in a directory that contains the image set .tar files. 2 Specify the registry to mirror the image set file to. The registry must start with docker:// . If you specify a top-level namespace for the mirror registry, you must also use this same namespace on subsequent executions. This command updates the mirror registry with the image set and generates the ImageContentSourcePolicy and CatalogSource resources. Verification Navigate into the oc-mirror-workspace/ directory that was generated. Navigate into the results directory, for example, results-1639608409/ . Verify that YAML files are present for the ImageContentSourcePolicy and CatalogSource resources. steps Configure your cluster to use the resources generated by oc-mirror. Troubleshooting Unable to retrieve source image . 4.8. Configuring your cluster to use the resources generated by oc-mirror After you have mirrored your image set to the mirror registry, you must apply the generated ImageContentSourcePolicy , CatalogSource , and release image signature resources into the cluster. The ImageContentSourcePolicy resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The CatalogSource resource is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry. The release image signatures are used to verify the mirrored release images. Prerequisites You have mirrored the image set to the registry mirror in the disconnected environment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI as a user with the cluster-admin role. 
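For reference, a login with a hypothetical API endpoint and user might look like the following; both values are assumptions for illustration and should be replaced with your cluster API URL and a user that has the cluster-admin role:
$ oc login https://api.cluster.example.com:6443 -u <admin_user> -p <password>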
Apply the YAML files from the results directory to the cluster by running the following command: USD oc apply -f ./oc-mirror-workspace/results-1639608409/ If you mirrored release images, apply the release image signatures to the cluster by running the following command: USD oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/ Note If you are mirroring Operators instead of clusters, you do not need to run USD oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/ . Running that command will return an error, as there are no release image signatures to apply. Verification Verify that the ImageContentSourcePolicy resources were successfully installed by running the following command: USD oc get imagecontentsourcepolicy Verify that the CatalogSource resources were successfully installed by running the following command: USD oc get catalogsource -n openshift-marketplace 4.9. Updating your mirror registry content You can update your mirror registry content by updating the image set configuration file and mirroring the image set to the mirror registry. The next time that you run the oc-mirror plugin, an image set is generated that only contains new and updated images since the previous execution. While updating the mirror registry, you must take into account the following considerations: Images are pruned from the target mirror registry if they are no longer included in the latest image set that was generated and mirrored. Therefore, ensure that you are updating images for the same combination of the following key components so that only a differential image set is created and mirrored: Image set configuration Destination registry Storage configuration Images can be pruned in either the disk-to-mirror or the mirror-to-mirror workflow. The generated image sets must be pushed to the target mirror registry in sequence. You can derive the sequence number from the file name of the generated image set archive file. Do not delete or modify the metadata image that is generated by the oc-mirror plugin. If you specified a top-level namespace for the mirror registry during the initial image set creation, then you must use this same namespace every time you run the oc-mirror plugin for the same mirror registry. For more information about the workflow to update the mirror registry content, see the "High level workflow" section. 4.9.1. Mirror registry update examples This section covers the use cases for updating the mirror registry from disk to mirror. Example ImageSetConfiguration file that was previously used for mirroring apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.12.1 maxVersion: 4.12.1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: rhacs-operator channels: - name: stable Mirroring a specific OpenShift Container Platform version by pruning the existing images Updated ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.13 1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: rhacs-operator channels: - name: stable 1 Replacing stable-4.12 with stable-4.13 prunes all of the stable-4.12 images.
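Because a channel change like this prunes images from the target mirror registry, you can preview exactly which images would be removed before mirroring by performing a dry run against the same registry, as described later in "Performing a dry run". A sketch that reuses the example registry name from the earlier procedures:
$ oc mirror --config=./imageset-config.yaml \
  docker://registry.example:5000 \
  --dry-run
Review the generated pruning-plan.json file in the workspace to confirm the list of images that would be pruned.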
Updating to the latest version of an Operator by pruning the existing images Updated ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.12.1 maxVersion: 4.12.1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: rhacs-operator channels: - name: stable 1 1 Using the same channel without specifying a version prunes the existing images and updates with the latest version of images. Mirroring a new Operator by pruning the existing Operator Updated ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.12.1 maxVersion: 4.12.1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: <new_operator_name> 1 channels: - name: stable 1 Replacing rhacs-operator with new_operator_name prunes the Red Hat Advanced Cluster Security for Kubernetes Operator. Pruning all the OpenShift Container Platform images Updated ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: Additional resources Image set configuration examples Mirroring an image set in a partially disconnected environment Mirroring an image set in a fully disconnected environment Configuring your cluster to use the resources generated by oc-mirror 4.10. Performing a dry run You can use oc-mirror to perform a dry run, without actually mirroring any images. This allows you to review the list of images that would be mirrored, as well as any images that would be pruned from the mirror registry. A dry run also allows you to catch any errors with your image set configuration early or use the generated list of images with other tools to carry out the mirroring operation. Prerequisites You have access to the internet to obtain the necessary container images. You have installed the OpenShift CLI ( oc ). You have installed the oc-mirror CLI plugin. You have created the image set configuration file. Procedure Run the oc mirror command with the --dry-run flag to perform a dry run: USD oc mirror --config=./imageset-config.yaml \ 1 docker://registry.example:5000 \ 2 --dry-run 3 1 Pass in the image set configuration file that was created. This procedure assumes that it is named imageset-config.yaml . 2 Specify the mirror registry. Nothing is mirrored to this registry as long as you use the --dry-run flag. 3 Use the --dry-run flag to generate the dry run artifacts and not an actual image set file. Example output Checking push permissions for registry.example:5000 Creating directory: oc-mirror-workspace/src/publish Creating directory: oc-mirror-workspace/src/v2 Creating directory: oc-mirror-workspace/src/charts Creating directory: oc-mirror-workspace/src/release-signatures No metadata detected, creating new workspace wrote mirroring manifests to oc-mirror-workspace/operators.1658342351/manifests-redhat-operator-index ... info: Planning completed in 31.48s info: Dry run complete Writing image mapping to oc-mirror-workspace/mapping.txt Navigate into the workspace directory that was generated: USD cd oc-mirror-workspace/ Review the mapping.txt file that was generated. 
This file contains a list of all images that would be mirrored. Review the pruning-plan.json file that was generated. This file contains a list of all images that would be pruned from the mirror registry when the image set is published. Note The pruning-plan.json file is only generated if your oc-mirror command points to your mirror registry and there are images to be pruned. 4.11. Including local OCI Operator catalogs While mirroring OpenShift Container Platform releases, Operator catalogs, and additional images from a registry to a partially disconnected cluster, you can include Operator catalog images from a local file-based catalog on disk. The local catalog must be in the Open Container Initiative (OCI) format. The local catalog and its contents are mirrored to your target mirror registry based on the filtering information in the image set configuration file. Important When mirroring local OCI catalogs, any OpenShift Container Platform releases or additional images that you want to mirror along with the local OCI-formatted catalog must be pulled from a registry. You cannot mirror OCI catalogs along with an oc-mirror image set file on disk. One example use case for using the OCI feature is if you have a CI/CD system building an OCI catalog to a location on disk, and you want to mirror that OCI catalog along with an OpenShift Container Platform release to your mirror registry. Note If you used the Technology Preview OCI local catalogs feature for the oc-mirror plugin for OpenShift Container Platform 4.12, you can no longer use the OCI local catalogs feature of the oc-mirror plugin to copy a catalog locally and convert it to OCI format as a first step to mirroring to a fully disconnected cluster. Prerequisites You have access to the internet to obtain the necessary container images. You have installed the OpenShift CLI ( oc ). You have installed the oc-mirror CLI plugin. Procedure Create the image set configuration file and adjust the settings as necessary. The following example image set configuration mirrors an OCI catalog on disk along with an OpenShift Container Platform release and a UBI image from registry.redhat.io . kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: local: path: /home/user/metadata 1 mirror: platform: channels: - name: stable-4.16 2 type: ocp graph: false operators: - catalog: oci:///home/user/oc-mirror/my-oci-catalog 3 targetCatalog: my-namespace/redhat-operator-index 4 packages: - name: aws-load-balancer-operator - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 5 packages: - name: rhacs-operator additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 6 1 Set the back-end location to save the image set metadata to. This location can be a registry or local directory. It is required to specify storageConfig values. 2 Optionally, include an OpenShift Container Platform release to mirror from registry.redhat.io . 3 Specify the absolute path to the location of the OCI catalog on disk. The path must start with oci:// when using the OCI feature. 4 Optionally, specify an alternative namespace and name to mirror the catalog as. 5 Optionally, specify additional Operator catalogs to pull from a registry. 6 Optionally, specify additional images to pull from a registry. Run the oc mirror command to mirror the OCI catalog to a target mirror registry: USD oc mirror --config=./imageset-config.yaml \ 1 docker://registry.example:5000 2 1 Pass in the image set configuration file. 
This procedure assumes that it is named imageset-config.yaml . 2 Specify the registry to mirror the content to. The registry must start with docker:// . If you specify a top-level namespace for the mirror registry, you must also use this same namespace on subsequent executions. Optionally, you can specify other flags to adjust the behavior of the OCI feature: --oci-insecure-signature-policy Do not push signatures to the target mirror registry. --oci-registries-config Specify the path to a TOML-formatted registries.conf file. You can use this to mirror from a different registry, such as a pre-production location for testing, without having to change the image set configuration file. This flag only affects local OCI catalogs, not any other mirrored content. Example registries.conf file [[registry]] location = "registry.redhat.io:5000" insecure = false blocked = false mirror-by-digest-only = true prefix = "" [[registry.mirror]] location = "preprod-registry.example.com" insecure = false steps Configure your cluster to use the resources generated by oc-mirror. Additional resources Configuring your cluster to use the resources generated by oc-mirror 4.12. Image set configuration parameters The oc-mirror plugin requires an image set configuration file that defines what images to mirror. The following table lists the available parameters for the ImageSetConfiguration resource. Table 4.1. ImageSetConfiguration parameters Parameter Description Values apiVersion The API version for the ImageSetConfiguration content. String. For example: mirror.openshift.io/v1alpha2 . archiveSize The maximum size, in GiB, of each archive file within the image set. Integer. For example: 4 mirror The configuration of the image set. Object mirror.additionalImages The additional images configuration of the image set. Array of objects. For example: additionalImages: - name: registry.redhat.io/ubi8/ubi:latest mirror.additionalImages.name The tag or digest of the image to mirror. String. For example: registry.redhat.io/ubi8/ubi:latest mirror.blockedImages The full tag, digest, or pattern of images to block from mirroring. Array of strings. For example: docker.io/library/alpine mirror.helm The helm configuration of the image set. Note that the oc-mirror plugin supports only helm charts that do not require user input when rendered. Object mirror.helm.local The local helm charts to mirror. Array of objects. For example: local: - name: podinfo path: /test/podinfo-5.0.0.tar.gz mirror.helm.local.name The name of the local helm chart to mirror. String. For example: podinfo . mirror.helm.local.path The path of the local helm chart to mirror. String. For example: /test/podinfo-5.0.0.tar.gz . mirror.helm.repositories The remote helm repositories to mirror from. Array of objects. For example: repositories: - name: podinfo url: https://example.github.io/podinfo charts: - name: podinfo version: 5.0.0 mirror.helm.repositories.name The name of the helm repository to mirror from. String. For example: podinfo . mirror.helm.repositories.url The URL of the helm repository to mirror from. String. For example: https://example.github.io/podinfo . mirror.helm.repositories.charts The remote helm charts to mirror. Array of objects. mirror.helm.repositories.charts.name The name of the helm chart to mirror. String. For example: podinfo . mirror.helm.repositories.charts.version The version of the named helm chart to mirror. String. For example: 5.0.0 . mirror.operators The Operators configuration of the image set. Array of objects. 
For example: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: elasticsearch-operator minVersion: '2.4.0' mirror.operators.catalog The Operator catalog to include in the image set. String. For example: registry.redhat.io/redhat/redhat-operator-index:v4.16 . mirror.operators.full When true , downloads the full catalog, Operator package, or Operator channel. Boolean. The default value is false . mirror.operators.packages The Operator packages configuration. Array of objects. For example: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: elasticsearch-operator minVersion: '5.2.3-31' mirror.operators.packages.name The Operator package name to include in the image set String. For example: elasticsearch-operator . mirror.operators.packages.channels The Operator package channel configuration. Object mirror.operators.packages.channels.name The Operator channel name, unique within a package, to include in the image set. String. For example: fast or stable-v4.16 . mirror.operators.packages.channels.maxVersion The highest version of the Operator mirror across all channels in which it exists. See the following note for further information. String. For example: 5.2.3-31 mirror.operators.packages.channels.minBundle The name of the minimum bundle to include, plus all bundles in the update graph to the channel head. Set this field only if the named bundle has no semantic version metadata. String. For example: bundleName mirror.operators.packages.channels.minVersion The lowest version of the Operator to mirror across all channels in which it exists. See the following note for further information. String. For example: 5.2.3-31 mirror.operators.packages.maxVersion The highest version of the Operator to mirror across all channels in which it exists. See the following note for further information. String. For example: 5.2.3-31 . mirror.operators.packages.minVersion The lowest version of the Operator to mirror across all channels in which it exists. See the following note for further information. String. For example: 5.2.3-31 . mirror.operators.skipDependencies If true , dependencies of bundles are not included. Boolean. The default value is false . mirror.operators.targetCatalog An alternative name and optional namespace hierarchy to mirror the referenced catalog as. String. For example: my-namespace/my-operator-catalog mirror.operators.targetName An alternative name to mirror the referenced catalog as. The targetName parameter is deprecated. Use the targetCatalog parameter instead. String. For example: my-operator-catalog mirror.operators.targetTag An alternative tag to append to the targetName or targetCatalog . String. For example: v1 mirror.platform The platform configuration of the image set. Object mirror.platform.architectures The architecture of the platform release payload to mirror. Array of strings. For example: architectures: - amd64 - arm64 - multi - ppc64le - s390x The default value is amd64 . The value multi ensures that the mirroring is supported for all available architectures, eliminating the need to specify individual architectures. mirror.platform.channels The platform channel configuration of the image set. Array of objects. For example: channels: - name: stable-4.10 - name: stable-4.16 mirror.platform.channels.full When true , sets the minVersion to the first release in the channel and the maxVersion to the last release in the channel. Boolean. The default value is false . 
mirror.platform.channels.name The name of the release channel. String. For example: stable-4.16 mirror.platform.channels.minVersion The minimum version of the referenced platform to be mirrored. String. For example: 4.12.6 mirror.platform.channels.maxVersion The highest version of the referenced platform to be mirrored. String. For example: 4.16.1 mirror.platform.channels.shortestPath Toggles shortest path mirroring or full range mirroring. Boolean. The default value is false . mirror.platform.channels.type The type of the platform to be mirrored. String. For example: ocp or okd . The default is ocp . mirror.platform.graph Indicates whether the OSUS graph is added to the image set and subsequently published to the mirror. Boolean. The default value is false . storageConfig The back-end configuration of the image set. Object storageConfig.local The local back-end configuration of the image set. Object storageConfig.local.path The path of the directory to contain the image set metadata. String. For example: ./path/to/dir/ . storageConfig.registry The registry back-end configuration of the image set. Object storageConfig.registry.imageURL The back-end registry URI. Can optionally include a namespace reference in the URI. String. For example: quay.io/myuser/imageset:metadata . storageConfig.registry.skipTLS Optionally skip TLS verification of the referenced back-end registry. Boolean. The default value is false . Note Using the minVersion and maxVersion properties to filter for a specific Operator version range can result in a multiple channel heads error. The error message states that there are multiple channel heads . This is because when the filter is applied, the update graph of the Operator is truncated. Operator Lifecycle Manager requires that every Operator channel contains versions that form an update graph with exactly one end point, that is, the latest version of the Operator. When the filter range is applied, that graph can turn into two or more separate graphs or a graph that has more than one end point. To avoid this error, do not filter out the latest version of an Operator. If you still run into the error, depending on the Operator, either the maxVersion property must be increased or the minVersion property must be decreased. Because every Operator graph can be different, you might need to adjust these values until the error resolves. 4.13. Image set configuration examples The following ImageSetConfiguration file examples show the configuration for various mirroring use cases. Use case: Including the shortest OpenShift Container Platform update path The following ImageSetConfiguration file uses a local storage backend and includes all OpenShift Container Platform versions along the shortest update path from the minimum version of 4.11.37 to the maximum version of 4.12.15 . Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.11.37 maxVersion: 4.12.15 shortestPath: true Use case: Including all versions of OpenShift Container Platform from a minimum to the latest version for multi-architecture releases The following ImageSetConfiguration file uses a registry storage backend and includes all OpenShift Container Platform versions starting at a minimum version of 4.13.4 to the latest version in the channel. 
On every invocation of oc-mirror with this image set configuration, the latest release of the stable-4.13 channel is evaluated, so running oc-mirror at regular intervals ensures that you automatically receive the latest releases of OpenShift Container Platform images. By setting the value of platform.architectures to multi , you can ensure that the mirroring is supported for multi-architecture releases. Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: architectures: - "multi" channels: - name: stable-4.13 minVersion: 4.13.4 maxVersion: 4.13.6 Use case: Including Operator versions from a minimum to the latest The following ImageSetConfiguration file uses a local storage backend and includes only the Red Hat Advanced Cluster Security for Kubernetes Operator, versions starting at 4.0.1 and later in the stable channel. Note When you specify a minimum or maximum version range, you might not receive all Operator versions in that range. By default, oc-mirror excludes any versions that are skipped or replaced by a newer version in the Operator Lifecycle Manager (OLM) specification. Operator versions that are skipped might be affected by a CVE or contain bugs. Use a newer version instead. For more information on skipped and replaced versions, see Creating an update graph with OLM . To receive all Operator versions in a specified range, you can set the mirror.operators.full field to true . Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: rhacs-operator channels: - name: stable minVersion: 4.0.1 Note To specify a maximum version instead of the latest, set the mirror.operators.packages.channels.maxVersion field. Use case: Including the Nutanix CSI Operator The following ImageSetConfiguration file uses a local storage backend and includes the Nutanix CSI Operator, the OpenShift Update Service (OSUS) graph image, and an additional Red Hat Universal Base Image (UBI). Example ImageSetConfiguration file kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: mylocalregistry/ocp-mirror/openshift4 skipTLS: false mirror: platform: channels: - name: stable-4.11 type: ocp graph: true operators: - catalog: registry.redhat.io/redhat/certified-operator-index:v4.16 packages: - name: nutanixcsioperator channels: - name: stable additionalImages: - name: registry.redhat.io/ubi9/ubi:latest Use case: Including the default Operator channel The following ImageSetConfiguration file includes the stable-5.7 and stable channels for the OpenShift Elasticsearch Operator. Even if only the packages from the stable-5.7 channel are needed, the stable channel must also be included in the ImageSetConfiguration file, because it is the default channel for the Operator. You must always include the default channel for the Operator package even if you do not use the bundles in that channel. Tip You can find the default channel by running the following command: oc mirror list operators --catalog=<catalog_name> --package=<package_name> . 
Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: elasticsearch-operator channels: - name: stable-5.7 - name: stable Use case: Including an entire catalog (all versions) The following ImageSetConfiguration file sets the mirror.operators.full field to true to include all versions for an entire Operator catalog. Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 full: true Use case: Including an entire catalog (channel heads only) The following ImageSetConfiguration file includes the channel heads for an entire Operator catalog. By default, for each Operator in the catalog, oc-mirror includes the latest Operator version (channel head) from the default channel. If you want to mirror all Operator versions, and not just the channel heads, you must set the mirror.operators.full field to true . This example also uses the targetCatalog field to specify an alternative namespace and name to mirror the catalog as. Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 targetCatalog: my-namespace/my-operator-catalog Use case: Including arbitrary images and helm charts The following ImageSetConfiguration file uses a registry storage backend and includes helm charts and an additional Red Hat Universal Base Image (UBI). Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration archiveSize: 4 storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: architectures: - "s390x" channels: - name: stable-4.16 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 helm: repositories: - name: redhat-helm-charts url: https://raw.githubusercontent.com/redhat-developer/redhat-helm-charts/master charts: - name: ibm-mongodb-enterprise-helm version: 0.2.0 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest Use case: Including the upgrade path for EUS releases The following ImageSetConfiguration file includes the eus-<version> channel, where the maxVersion value is at least two minor versions higher than the minVersion value. For example, in this ImageSetConfiguration file, the minVersion is set to 4.12.28 , while the maxVersion for the eus-4.14 channel is 4.14.16 . Example ImageSetConfiguration file kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v2alpha1 mirror: platform: graph: true # Required for the OSUS Operator architectures: - amd64 channels: - name: stable-4.12 minVersion: '4.12.28' maxVersion: '4.12.28' shortestPath: true type: ocp - name: eus-4.14 minVersion: '4.12.28' maxVersion: '4.14.16' shortestPath: true type: ocp 4.14. Command reference for oc-mirror The following tables describe the oc mirror subcommands and flags: Table 4.2. oc mirror subcommands Subcommand Description completion Generate the autocompletion script for the specified shell. describe Output the contents of an image set. 
help Show help about any subcommand. init Output an initial image set configuration template. list List available platform and Operator content and their version. version Output the oc-mirror version. Table 4.3. oc mirror flags Flag Description -c , --config <string> Specify the path to an image set configuration file. --continue-on-error If any non image-pull related error occurs, continue and attempt to mirror as much as possible. --dest-skip-tls Disable TLS validation for the target registry. --dest-use-http Use plain HTTP for the target registry. --dry-run Print actions without mirroring images. Generates mapping.txt and pruning-plan.json files. --from <string> Specify the path to an image set archive that was generated by an execution of oc-mirror to load into a target registry. -h , --help Show the help. --ignore-history Ignore past mirrors when downloading images and packing layers. Disables incremental mirroring and might download more data. --manifests-only Generate manifests for ImageContentSourcePolicy objects to configure a cluster to use the mirror registry, but do not actually mirror any images. To use this flag, you must pass in an image set archive with the --from flag. --max-nested-paths <int> Specify the maximum number of nested paths for destination registries that limit nested paths. The default is 0 . --max-per-registry <int> Specify the number of concurrent requests allowed per registry. The default is 6 . --oci-insecure-signature-policy Do not push signatures when mirroring local OCI catalogs (with --include-local-oci-catalogs ). --oci-registries-config Provide a registries configuration file to specify an alternative registry location to copy from when mirroring local OCI catalogs (with --include-local-oci-catalogs ). --skip-cleanup Skip removal of artifact directories. --skip-image-pin Do not replace image tags with digest pins in Operator catalogs. --skip-metadata-check Skip metadata when publishing an image set. This is only recommended when the image set was created with --ignore-history . --skip-missing If an image is not found, skip it instead of reporting an error and aborting execution. Does not apply to custom images explicitly specified in the image set configuration. --skip-pruning Disable automatic pruning of images from the target mirror registry. --skip-verification Skip digest verification. --source-skip-tls Disable TLS validation for the source registry. --source-use-http Use plain HTTP for the source registry. -v , --verbose <int> Specify the number for the log level verbosity. Valid values are 0 - 9 . The default is 0 . 4.15. Additional resources About cluster updates in a disconnected environment
[ "tar xvzf oc-mirror.tar.gz", "chmod +x oc-mirror", "sudo mv oc-mirror /usr/local/bin/.", "oc mirror help", "cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "mkdir -p <directory_name>", "cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file>", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "oc mirror init --registry <storage_backend> > imageset-config.yaml 1", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.16 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 6 packages: - name: serverless-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 9 helm: {}", "oc mirror --config=./<imageset-config.yaml> \\ 1 docker://registry.example:5000 2", "oc mirror --config=./imageset-config.yaml \\ 1 file://<path_to_output_directory> 2", "cd <path_to_output_directory>", "ls", "mirror_seq1_000000.tar", "oc mirror --from=./mirror_seq1_000000.tar \\ 1 docker://registry.example:5000 2", "oc apply -f ./oc-mirror-workspace/results-1639608409/", "oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/", "oc get imagecontentsourcepolicy", "oc get catalogsource -n openshift-marketplace", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.12.1 maxVersion: 4.12.1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: rhacs-operator channels: - name: stable", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.13 1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: rhacs-operator channels: - name: stable", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.12.1 maxVersion: 4.12.1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: rhacs-operator channels: - name: stable 1", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 
4.12.1 maxVersion: 4.12.1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: <new_operator_name> 1 channels: - name: stable", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages:", "oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 \\ 2 --dry-run 3", "Checking push permissions for registry.example:5000 Creating directory: oc-mirror-workspace/src/publish Creating directory: oc-mirror-workspace/src/v2 Creating directory: oc-mirror-workspace/src/charts Creating directory: oc-mirror-workspace/src/release-signatures No metadata detected, creating new workspace wrote mirroring manifests to oc-mirror-workspace/operators.1658342351/manifests-redhat-operator-index info: Planning completed in 31.48s info: Dry run complete Writing image mapping to oc-mirror-workspace/mapping.txt", "cd oc-mirror-workspace/", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: local: path: /home/user/metadata 1 mirror: platform: channels: - name: stable-4.16 2 type: ocp graph: false operators: - catalog: oci:///home/user/oc-mirror/my-oci-catalog 3 targetCatalog: my-namespace/redhat-operator-index 4 packages: - name: aws-load-balancer-operator - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 5 packages: - name: rhacs-operator additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 6", "oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 2", "[[registry]] location = \"registry.redhat.io:5000\" insecure = false blocked = false mirror-by-digest-only = true prefix = \"\" [[registry.mirror]] location = \"preprod-registry.example.com\" insecure = false", "additionalImages: - name: registry.redhat.io/ubi8/ubi:latest", "local: - name: podinfo path: /test/podinfo-5.0.0.tar.gz", "repositories: - name: podinfo url: https://example.github.io/podinfo charts: - name: podinfo version: 5.0.0", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: elasticsearch-operator minVersion: '2.4.0'", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: elasticsearch-operator minVersion: '5.2.3-31'", "architectures: - amd64 - arm64 - multi - ppc64le - s390x", "channels: - name: stable-4.10 - name: stable-4.16", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.11.37 maxVersion: 4.12.15 shortestPath: true", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: architectures: - \"multi\" channels: - name: stable-4.13 minVersion: 4.13.4 maxVersion: 4.13.6", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: rhacs-operator channels: - name: stable minVersion: 4.0.1", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: mylocalregistry/ocp-mirror/openshift4 skipTLS: false mirror: platform: channels: - name: stable-4.11 type: ocp graph: true operators: - catalog: 
registry.redhat.io/redhat/certified-operator-index:v4.16 packages: - name: nutanixcsioperator channels: - name: stable additionalImages: - name: registry.redhat.io/ubi9/ubi:latest", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: elasticsearch-operator channels: - name: stable-5.7 - name: stable", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 full: true", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 targetCatalog: my-namespace/my-operator-catalog", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration archiveSize: 4 storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: architectures: - \"s390x\" channels: - name: stable-4.16 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 helm: repositories: - name: redhat-helm-charts url: https://raw.githubusercontent.com/redhat-developer/redhat-helm-charts/master charts: - name: ibm-mongodb-enterprise-helm version: 0.2.0 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v2alpha1 mirror: platform: graph: true # Required for the OSUS Operator architectures: - amd64 channels: - name: stable-4.12 minVersion: '4.12.28' maxVersion: '4.12.28' shortestPath: true type: ocp - name: eus-4.14 minVersion: '4.12.28' maxVersion: '4.14.16' shortestPath: true type: ocp" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/disconnected_installation_mirroring/installing-mirroring-disconnected
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message .
https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_openshift_container_platform_hardware_bare_metal_certification_policy_guide/con-conscious-language-message
2.2. Man Pages
2.2. Man Pages This section lists man pages that are relevant to Red Hat Cluster Suite, as an additional resource. Cluster Infrastructure ccs_tool (8) - The tool used to make online updates of CCS config files ccs_test (8) - The diagnostic tool for a running Cluster Configuration System ccsd (8) - The daemon used to access CCS cluster configuration files ccs (7) - Cluster Configuration System cman_tool (8) - Cluster Management Tool cluster.conf [cluster] (5) - The configuration file for cluster products qdisk (5) - a disk-based quorum daemon for CMAN / Linux-Cluster mkqdisk (8) - Cluster Quorum Disk Utility qdiskd (8) - Cluster Quorum Disk Daemon fence_ack_manual (8) - program run by an operator as a part of manual I/O Fencing fence_apc (8) - I/O Fencing agent for APC MasterSwitch fence_bladecenter (8) - I/O Fencing agent for IBM Bladecenter fence_brocade (8) - I/O Fencing agent for Brocade FC switches fence_bullpap (8) - I/O Fencing agent for Bull FAME architecture controlled by a PAP management console fence_drac (8) - fencing agent for Dell Remote Access Card fence_egenera (8) - I/O Fencing agent for the Egenera BladeFrame fence_gnbd (8) - I/O Fencing agent for GNBD-based GFS clusters fence_ilo (8) - I/O Fencing agent for HP Integrated Lights Out card fence_ipmilan (8) - I/O Fencing agent for machines controlled by IPMI over LAN fence_manual (8) - program run by fenced as a part of manual I/O Fencing fence_mcdata (8) - I/O Fencing agent for McData FC switches fence_node (8) - A program which performs I/O fencing on a single node fence_rib (8) - I/O Fencing agent for Compaq Remote Insight Lights Out card fence_rsa (8) - I/O Fencing agent for IBM RSA II fence_sanbox2 (8) - I/O Fencing agent for QLogic SANBox2 FC switches fence_scsi (8) - I/O fencing agent for SCSI persistent reservations fence_tool (8) - A program to join and leave the fence domain fence_vixel (8) - I/O Fencing agent for Vixel FC switches fence_wti (8) - I/O Fencing agent for WTI Network Power Switch fence_xvm (8) - I/O Fencing agent for Xen virtual machines fence_xvmd (8) - I/O Fencing agent host for Xen virtual machines fenced (8) - the I/O Fencing daemon High-availability Service Management clusvcadm (8) - Cluster User Service Administration Utility clustat (8) - Cluster Status Utility Clurgmgrd [clurgmgrd] (8) - Resource Group (Cluster Service) Manager Daemon clurmtabd (8) - Cluster NFS Remote Mount Table Daemon GFS gfs_fsck (8) - Offline GFS file system checker gfs_grow (8) - Expand a GFS filesystem gfs_jadd (8) - Add journals to a GFS filesystem gfs_mount (8) - GFS mount options gfs_quota (8) - Manipulate GFS disk quotas gfs_tool (8) - interface to gfs ioctl calls Cluster Logical Volume Manager clvmd (8) - cluster LVM daemon lvm (8) - LVM2 tools lvm.conf [lvm] (5) - Configuration file for LVM2 lvmchange (8) - change attributes of the logical volume manager pvcreate (8) - initialize a disk or partition for use by LVM lvs (8) - report information about logical volumes Global Network Block Device gnbd_export (8) - the interface to export GNBDs gnbd_import (8) - manipulate GNBD block devices on a client gnbd_serv (8) - gnbd server daemon LVS pulse (8) - heartbeating daemon for monitoring the health of cluster nodes lvs.cf [lvs] (5) - configuration file for lvs lvscan (8) - scan (all disks) for logical volumes lvsd (8) - daemon to control the Red Hat clustering services ipvsadm (8) - Linux Virtual Server administration ipvsadm-restore (8) - restore the IPVS table from stdin ipvsadm-save (8) - save the IPVS table to 
stdout nanny (8) - tool to monitor status of services in a cluster send_arp (8) - tool to notify network of a new IP address / MAC address mapping
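Each of these pages can be read on a cluster node with the man command, using the section number shown in parentheses; for example, assuming the corresponding package is installed:
$ man 8 cman_tool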
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s1-man-pages-CSO
Chapter 7. Troubleshooting
Chapter 7. Troubleshooting 7.1. Troubleshooting installations 7.1.1. Determining where installation issues occur When troubleshooting OpenShift Container Platform installation issues, you can monitor installation logs to determine at which stage issues occur. Then, retrieve diagnostic data relevant to that stage. OpenShift Container Platform installation proceeds through the following stages: Ignition configuration files are created. The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. The control plane machines use the bootstrap machine to form an etcd cluster. The bootstrap machine starts a temporary Kubernetes control plane using the new etcd cluster. The temporary control plane schedules the production control plane to the control plane machines. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine adds OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. The control plane sets up the worker nodes. The control plane installs additional services in the form of a set of Operators. The cluster downloads and configures remaining components needed for the day-to-day operation, including the creation of worker machines in supported environments. 7.1.2. User-provisioned infrastructure installation considerations The default installation method uses installer-provisioned infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. You can alternatively install OpenShift Container Platform 4.17 on infrastructure that you provide. If you use this installation method, follow user-provisioned infrastructure installation documentation carefully. Additionally, review the following considerations before the installation: Check the Red Hat Enterprise Linux (RHEL) Ecosystem to determine the level of Red Hat Enterprise Linux CoreOS (RHCOS) support provided for your chosen server hardware or virtualization technology. Many virtualization and cloud environments require agents to be installed on guest operating systems. Ensure that these agents are installed as a containerized workload deployed through a daemon set. Install cloud provider integration if you want to enable features such as dynamic storage, on-demand service routing, node hostname to Kubernetes hostname resolution, and cluster autoscaling. Note It is not possible to enable cloud provider integration in OpenShift Container Platform environments that mix resources from different cloud providers, or that span multiple physical or virtual platforms. The node life cycle controller will not allow nodes that are external to the existing provider to be added to a cluster, and it is not possible to specify more than one cloud provider integration. A provider-specific Machine API implementation is required if you want to use machine sets or autoscaling to automatically provision OpenShift Container Platform cluster nodes. Check whether your chosen cloud provider offers a method to inject Ignition configuration files into hosts as part of their initial deployment. 
If they do not, you will need to host Ignition configuration files by using an HTTP server. The steps taken to troubleshoot Ignition configuration file issues will differ depending on which of these two methods is deployed. Storage needs to be manually provisioned if you want to leverage optional framework components such as the embedded container registry, Elasticsearch, or Prometheus. Default storage classes are not defined in user-provisioned infrastructure installations unless explicitly configured. A load balancer is required to distribute API requests across all control plane nodes in highly available OpenShift Container Platform environments. You can use any TCP-based load balancing solution that meets OpenShift Container Platform DNS routing and port requirements. 7.1.3. Checking a load balancer configuration before OpenShift Container Platform installation Check your load balancer configuration prior to starting an OpenShift Container Platform installation. Prerequisites You have configured an external load balancer of your choosing, in preparation for an OpenShift Container Platform installation. The following example is based on a Red Hat Enterprise Linux (RHEL) host using HAProxy to provide load balancing services to a cluster. You have configured DNS in preparation for an OpenShift Container Platform installation. You have SSH access to your load balancer. Procedure Check that the haproxy systemd service is active: USD ssh <user_name>@<load_balancer> systemctl status haproxy Verify that the load balancer is listening on the required ports. The following example references ports 80 , 443 , 6443 , and 22623 . For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 6, verify port status by using the netstat command: USD ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623' For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 7 or 8, verify port status by using the ss command: USD ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623' Note Red Hat recommends the ss command instead of netstat in Red Hat Enterprise Linux (RHEL) 7 or later. ss is provided by the iproute package. For more information on the ss command, see the Red Hat Enterprise Linux (RHEL) 7 Performance Tuning Guide . Check that the wildcard DNS record resolves to the load balancer: USD dig <wildcard_fqdn> @<dns_server> 7.1.4. Specifying OpenShift Container Platform installer log levels By default, the OpenShift Container Platform installer log level is set to info . If more detailed logging is required when diagnosing a failed OpenShift Container Platform installation, you can increase the openshift-install log level to debug when starting the installation again. Prerequisites You have access to the installation host. Procedure Set the installation log level to debug when initiating the installation: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug 1 1 Possible log levels include info , warn , error, and debug . 7.1.5. Troubleshooting openshift-install command issues If you experience issues running the openshift-install command, check the following: The installation has been initiated within 24 hours of Ignition configuration file creation. The Ignition files are created when the following command is run: USD ./openshift-install create ignition-configs --dir=./install_dir The install-config.yaml file is in the same directory as the installer. 
If an alternative installation path is declared by using the ./openshift-install --dir option, verify that the install-config.yaml file exists within that directory. 7.1.6. Monitoring installation progress You can monitor high-level installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. This provides greater visibility into how an installation progresses and helps identify the stage at which an installation failure occurs. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have the fully qualified domain names of the bootstrap and control plane nodes. Note The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host. Procedure Watch the installation log as the installation progresses: USD tail -f ~/<installation_directory>/.openshift_install.log Monitor the bootkube.service journald unit log on the bootstrap node, after it has booted. This provides visibility into the bootstrapping of the first control plane. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service Note The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity. Monitor the logs using oc : USD oc adm node-logs --role=master -u kubelet If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service Monitor crio.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity. Monitor the logs using oc : USD oc adm node-logs --role=master -u crio If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service
If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found . To verify that the Ignition file was received by the bootstrap node, query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files, enter the following command: USD grep -is 'bootstrap.ign' /var/log/httpd/access_log If the bootstrap Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded. If the Ignition file was not received, check that the Ignition files exist and that they have the appropriate file and web server permissions on the serving host directly. If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment. Review the bootstrap node's console to determine if the mechanism is injecting the bootstrap node Ignition file correctly. Verify the availability of the bootstrap node's assigned storage device. Verify that the bootstrap node has been assigned an IP address from the DHCP server. Collect bootkube.service journald unit logs from the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service Note The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. Collect logs from the bootstrap node containers. Collect the logs using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done' If the bootstrap process fails, verify the following. You can resolve api.<cluster_name>.<base_domain> from the installation host. The load balancer proxies port 6443 connections to bootstrap and control plane nodes. Ensure that the proxy configuration meets OpenShift Container Platform installation requirements. 7.1.8. Investigating control plane node installation issues If you experience control plane node installation issues, determine the control plane node OpenShift Container Platform software defined network (SDN), and network Operator status. Collect kubelet.service , crio.service journald unit logs, and control plane node container logs for visibility into control plane node agent, CRI-O container runtime, and pod activity. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have the fully qualified domain names of the bootstrap and control plane nodes. If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server's fully qualified domain name and the port number. You must also have SSH access to the HTTP host. Note The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host. Procedure If you have access to the console for the control plane node, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console. Verify Ignition file configuration. 
If you are hosting Ignition configuration files by using an HTTP server. Verify the control plane node Ignition file URL. Replace <http_server_fqdn> with HTTP server's fully qualified domain name: USD curl -I http://<http_server_fqdn>:<port>/master.ign 1 1 The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found . To verify that the Ignition file was received by the control plane node query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files: USD grep -is 'master.ign' /var/log/httpd/access_log If the master Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded. If the Ignition file was not received, check that it exists on the serving host directly. Ensure that the appropriate file and web server permissions are in place. If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment. Review the console for the control plane node to determine if the mechanism is injecting the control plane node Ignition file correctly. Check the availability of the storage device assigned to the control plane node. Verify that the control plane node has been assigned an IP address from the DHCP server. Determine control plane node status. Query control plane node status: USD oc get nodes If one of the control plane nodes does not reach a Ready status, retrieve a detailed node description: USD oc describe node <master_node> Note It is not possible to run oc commands if an installation issue prevents the OpenShift Container Platform API from running or if the kubelet is not running yet on each node: Determine OVN-Kubernetes status. Review ovnkube-node daemon set status, in the openshift-ovn-kubernetes namespace: USD oc get daemonsets -n openshift-ovn-kubernetes If those resources are listed as Not found , review pods in the openshift-ovn-kubernetes namespace: USD oc get pods -n openshift-ovn-kubernetes Review logs relating to failed OpenShift Container Platform OVN-Kubernetes pods in the openshift-ovn-kubernetes namespace: USD oc logs <ovn-k_pod> -n openshift-ovn-kubernetes Determine cluster network configuration status. Review whether the cluster's network configuration exists: USD oc get network.config.openshift.io cluster -o yaml If the installer failed to create the network configuration, generate the Kubernetes manifests again and review message output: USD ./openshift-install create manifests Review the pod status in the openshift-network-operator namespace to determine whether the Cluster Network Operator (CNO) is running: USD oc get pods -n openshift-network-operator Gather network Operator pod logs from the openshift-network-operator namespace: USD oc logs pod/<network_operator_pod_name> -n openshift-network-operator Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity. Retrieve the logs using oc : USD oc adm node-logs --role=master -u kubelet If the API is not functional, review the logs using SSH instead. 
Replace <master-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . Retrieve crio.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity. Retrieve the logs using oc : USD oc adm node-logs --role=master -u crio If the API is not functional, review the logs using SSH instead: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service Collect logs from specific subdirectories under /var/log/ on control plane nodes. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver/audit.log If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log : USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log Review control plane node container logs using SSH. List the containers: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a Retrieve a container's logs using crictl : USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id> If you experience control plane node configuration issues, verify that the MCO, MCO endpoint, and DNS record are functioning. The Machine Config Operator (MCO) manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity. Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate values: USD curl https://api-int.<cluster_name>:22623/config/master If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint is configured to run on port 22623. Verify that the MCO endpoint's DNS record is configured and resolves to the load balancer. Run a DNS lookup for the defined MCO endpoint name: USD dig api-int.<cluster_name> @<dns_server> Run a reverse lookup to the assigned MCO IP address on the load balancer: USD dig -x <load_balancer_mco_ip_address> @<dns_server> Verify that the MCO is functioning from the bootstrap node directly. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master System clock time must be synchronized between bootstrap, master, and worker nodes. 
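Because certificate validation fails when node clocks drift too far apart, it can help to compare chrony status on several machines at once before running the per-node checks that follow. The loop below is only a sketch; the host names are placeholders for your bootstrap and control plane nodes:

# Compare clock status across nodes (placeholder host names).
for node in bootstrap master-0 master-1 master-2; do
  echo "== ${node} =="
  ssh core@${node}.<cluster_name>.<base_domain> chronyc tracking | grep -E 'Ref time|System time|Leap status'
done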
Check each node's system clock reference time and time synchronization statistics: USD ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking Review certificate validity: USD openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text 7.1.9. Investigating etcd installation issues If you experience etcd issues during installation, you can check etcd pod status and collect etcd pod logs. You can also verify etcd DNS records and check DNS availability on control plane nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have the fully qualified domain names of the control plane nodes. Procedure Check the status of etcd pods. Review the status of pods in the openshift-etcd namespace: USD oc get pods -n openshift-etcd Review the status of pods in the openshift-etcd-operator namespace: USD oc get pods -n openshift-etcd-operator If any of the pods listed by the commands are not showing a Running or a Completed status, gather diagnostic information for the pod. Review events for the pod: USD oc describe pod/<pod_name> -n <namespace> Inspect the pod's logs: USD oc logs pod/<pod_name> -n <namespace> If the pod has more than one container, the preceding command will create an error, and the container names will be provided in the error message. Inspect logs for each container: USD oc logs pod/<pod_name> -c <container_name> -n <namespace> If the API is not functional, review etcd pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values. List etcd pods on each control plane node: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd- For any pods not showing Ready status, inspect pod status in detail. Replace <pod_id> with the pod's ID listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id> List containers related to a pod: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>' For any containers not showing Ready status, inspect container status in detail. Replace <container_id> with container IDs listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id> Review the logs for any containers not showing a Ready status. Replace <container_id> with the container IDs listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id> Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . Validate primary and secondary DNS server connectivity from control plane nodes. 7.1.10. 
Investigating control plane node kubelet and API server issues To investigate control plane node kubelet and API server issues during installation, check DNS, DHCP, and load balancer functionality. Also, verify that certificates have not expired. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have the fully qualified domain names of the control plane nodes. Procedure Verify that the API server's DNS record directs the kubelet on control plane nodes to https://api-int.<cluster_name>.<base_domain>:6443 . Ensure that the record references the load balancer. Ensure that the load balancer's port 6443 definition references each control plane node. Check that unique control plane node hostnames have been provided by DHCP. Inspect the kubelet.service journald unit logs on each control plane node. Retrieve the logs using oc : USD oc adm node-logs --role=master -u kubelet If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . Check for certificate expiration messages in the control plane node kubelet logs. Retrieve the log using oc : USD oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired' If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service | grep -is 'x509: certificate has expired' 7.1.11. Investigating worker node installation issues If you experience worker node installation issues, you can review the worker node status. Collect kubelet.service , crio.service journald unit logs and the worker node container logs for visibility into the worker node agent, CRI-O container runtime and pod activity. Additionally, you can check the Ignition file and Machine API Operator functionality. If worker node postinstallation configuration fails, check Machine Config Operator (MCO) and DNS functionality. You can also verify system clock synchronization between the bootstrap, master, and worker nodes, and validate certificates. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have the fully qualified domain names of the bootstrap and worker nodes. If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server's fully qualified domain name and the port number. You must also have SSH access to the HTTP host. 
Note The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host. Procedure If you have access to the worker node's console, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console. Verify Ignition file configuration. If you are hosting Ignition configuration files by using an HTTP server. Verify the worker node Ignition file URL. Replace <http_server_fqdn> with HTTP server's fully qualified domain name: USD curl -I http://<http_server_fqdn>:<port>/worker.ign 1 1 The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found . To verify that the Ignition file was received by the worker node, query the HTTP server logs on the HTTP host. For example, if you are using an Apache web server to serve Ignition files: USD grep -is 'worker.ign' /var/log/httpd/access_log If the worker Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded. If the Ignition file was not received, check that it exists on the serving host directly. Ensure that the appropriate file and web server permissions are in place. If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment. Review the worker node's console to determine if the mechanism is injecting the worker node Ignition file correctly. Check the availability of the worker node's assigned storage device. Verify that the worker node has been assigned an IP address from the DHCP server. Determine worker node status. Query node status: USD oc get nodes Retrieve a detailed node description for any worker nodes not showing a Ready status: USD oc describe node <worker_node> Note It is not possible to run oc commands if an installation issue prevents the OpenShift Container Platform API from running or if the kubelet is not running yet on each node. Unlike control plane nodes, worker nodes are deployed and scaled using the Machine API Operator. Check the status of the Machine API Operator. Review Machine API Operator pod status: USD oc get pods -n openshift-machine-api If the Machine API Operator pod does not have a Ready status, detail the pod's events: USD oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api Inspect machine-api-operator container logs. The container runs within the machine-api-operator pod: USD oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator Also inspect kube-rbac-proxy container logs. The container also runs within the machine-api-operator pod: USD oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy Monitor kubelet.service journald unit logs on worker nodes, after they have booted. This provides visibility into worker node agent activity. Retrieve the logs using oc : USD oc adm node-logs --role=worker -u kubelet If the API is not functional, review the logs using SSH instead. Replace <worker-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. 
Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . Retrieve crio.service journald unit logs on worker nodes, after they have booted. This provides visibility into worker node CRI-O container runtime activity. Retrieve the logs using oc : USD oc adm node-logs --role=worker -u crio If the API is not functional, review the logs using SSH instead: USD ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service Collect logs from specific subdirectories under /var/log/ on worker nodes. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/sssd/ on all worker nodes: USD oc adm node-logs --role=worker --path=sssd Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/sssd/sssd.log contents from all worker nodes: USD oc adm node-logs --role=worker --path=sssd/sssd.log If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/sssd/sssd.log : USD ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log Review worker node container logs using SSH. List the containers: USD ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a Retrieve a container's logs using crictl : USD ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id> If you experience worker node configuration issues, verify that the MCO, MCO endpoint, and DNS record are functioning. The Machine Config Operator (MCO) manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity. Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate values: USD curl https://api-int.<cluster_name>:22623/config/worker If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint is configured to run on port 22623. Verify that the MCO endpoint's DNS record is configured and resolves to the load balancer. Run a DNS lookup for the defined MCO endpoint name: USD dig api-int.<cluster_name> @<dns_server> Run a reverse lookup to the assigned MCO IP address on the load balancer: USD dig -x <load_balancer_mco_ip_address> @<dns_server> Verify that the MCO is functioning from the bootstrap node directly. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker System clock time must be synchronized between bootstrap, master, and worker nodes. Check each node's system clock reference time and time synchronization statistics: USD ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking Review certificate validity: USD openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text 7.1.12. Querying Operator status after installation You can check Operator status at the end of an installation. Retrieve diagnostic data for Operators that do not become available. 
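For a quick overview, you can reduce the cluster Operator list to only the entries that are not yet reporting an Available status of True . The following filter is a sketch that assumes the jq tool is installed on the host where you run oc :

# List cluster Operators whose Available condition is not True.
oc get clusteroperators -o json \
  | jq -r '.items[] | select(any(.status.conditions[]?; .type == "Available" and .status != "True")) | .metadata.name'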
Review logs for any Operator pods that are listed as Pending or have an error status. Validate base images used by problematic pods. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Check that cluster Operators are all available at the end of an installation. USD oc get clusteroperators Verify that all of the required certificate signing requests (CSRs) are approved. Some nodes might not move to a Ready status and some cluster Operators might not become available if there are pending CSRs. Check the status of the CSRs and ensure that you see a client and server request with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 1 csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending 2 csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... 1 A client request CSR. 2 A server request CSR. In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager . Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve View Operator events: USD oc describe clusteroperator <operator_name> Review Operator pod status within the Operator's namespace: USD oc get pods -n <operator_namespace> Obtain a detailed description for pods that do not have Running status: USD oc describe pod/<operator_pod_name> -n <operator_namespace> Inspect pod logs: USD oc logs pod/<operator_pod_name> -n <operator_namespace> When experiencing pod base image related issues, review base image status. 
Obtain details of the base image used by a problematic pod: USD oc get pod -o "jsonpath={range .status.containerStatuses[*]}{.name}{'\t'}{.state}{'\t'}{.image}{'\n'}{end}" <operator_pod_name> -n <operator_namespace> List base image release information: USD oc adm release info <image_path>:<tag> --commits 7.1.13. Gathering logs from a failed installation If you gave an SSH key to your installation program, you can gather data about your failed installation. Note You use a different command to gather logs about an unsuccessful installation than to gather logs from a running cluster. If you must gather logs from a running cluster, use the oc adm must-gather command. Prerequisites Your OpenShift Container Platform installation failed before the bootstrap process finished. The bootstrap node is running and accessible through SSH. The ssh-agent process is active on your computer, and you provided the same SSH key to both the ssh-agent process and the installation program. If you tried to install a cluster on infrastructure that you provisioned, you must have the fully qualified domain names of the bootstrap and control plane nodes. Procedure Generate the commands that are required to obtain the installation logs from the bootstrap and control plane machines: If you used installer-provisioned infrastructure, change to the directory that contains the installation program and run the following command: USD ./openshift-install gather bootstrap --dir <installation_directory> 1 1 installation_directory is the directory you specified when you ran ./openshift-install create cluster . This directory contains the OpenShift Container Platform definition files that the installation program creates. For installer-provisioned infrastructure, the installation program stores information about the cluster, so you do not specify the hostnames or IP addresses. If you used infrastructure that you provisioned yourself, change to the directory that contains the installation program and run the following command: USD ./openshift-install gather bootstrap --dir <installation_directory> \ 1 --bootstrap <bootstrap_address> \ 2 --master <master_1_address> \ 3 --master <master_2_address> \ 4 --master <master_3_address> 5 1 For installation_directory , specify the same directory you specified when you ran ./openshift-install create cluster . This directory contains the OpenShift Container Platform definition files that the installation program creates. 2 <bootstrap_address> is the fully qualified domain name or IP address of the cluster's bootstrap machine. 3 4 5 For each control plane, or master, machine in your cluster, replace <master_*_address> with its fully qualified domain name or IP address. Note A default cluster contains three control plane machines. List all of your control plane machines as shown, no matter how many your cluster uses. Example output INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here "<installation_directory>/log-bundle-<timestamp>.tar.gz" If you open a Red Hat support case about your installation failure, include the compressed logs in the case. 7.1.14. Additional resources See Installation process for more details on OpenShift Container Platform installation types and process. 7.2. Verifying node health 7.2.1. Reviewing node status, resource usage, and configuration Review cluster node health status, resource consumption statistics, and node logs. Additionally, query kubelet status on individual nodes. 
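As a first pass, it can be useful to surface only the nodes that are not reporting a plain Ready status before examining individual nodes in detail. The following one-liner is a sketch; it also lists cordoned nodes because their status includes SchedulingDisabled :

# Print nodes whose status is anything other than exactly "Ready".
oc get nodes --no-headers | awk '$2 != "Ready"'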
Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the name, status, and role for all nodes in the cluster: USD oc get nodes Summarize CPU and memory usage for each node within the cluster: USD oc adm top nodes Summarize CPU and memory usage for a specific node: USD oc adm top node my-node 7.2.2. Querying the kubelet's status on a node You can review cluster node health status, resource consumption statistics, and node logs. Additionally, you can query kubelet status on individual nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure The kubelet is managed using a systemd service on each node. Review the kubelet's status by querying the kubelet systemd service within a debug pod. Start a debug pod for a node: USD oc debug node/my-node Note If you are running oc debug on a control plane node, you can find administrative kubeconfig files in the /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs directory. Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. Check whether the kubelet systemd service is active on the node: # systemctl is-active kubelet Output a more detailed kubelet.service status summary: # systemctl status kubelet 7.2.3. Querying cluster node journal logs You can gather journald unit logs and other logs within /var/log on individual cluster nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Your API service is still functional. You have SSH access to your hosts. Procedure Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The following example queries control plane nodes only: USD oc adm node-logs --role=master -u kubelet 1 1 Replace kubelet as appropriate to query other unit logs. Collect logs from specific subdirectories under /var/log/ on cluster nodes. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver/audit.log If the API is not functional, review the logs on each node using SSH instead. 
The following example tails /var/log/openshift-apiserver/audit.log : USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . 7.3. Troubleshooting CRI-O container runtime issues 7.3.1. About CRI-O container runtime engine CRI-O is a Kubernetes-native container engine implementation that integrates closely with the operating system to deliver an efficient and optimized Kubernetes experience. The CRI-O container engine runs as a systemd service on each OpenShift Container Platform cluster node. When container runtime issues occur, verify the status of the crio systemd service on each node. Gather CRI-O journald unit logs from nodes that have container runtime issues. 7.3.2. Verifying CRI-O runtime engine status You can verify CRI-O container runtime engine status on each cluster node. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Review CRI-O status by querying the crio systemd service on a node, within a debug pod. Start a debug pod for a node: USD oc debug node/my-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. Check whether the crio systemd service is active on the node: # systemctl is-active crio Output a more detailed crio.service status summary: # systemctl status crio.service 7.3.3. Gathering CRI-O journald unit logs If you experience CRI-O issues, you can obtain CRI-O journald unit logs from a node. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). You have the fully qualified domain names of the control plane or control plane machines. Procedure Gather CRI-O journald unit logs. The following example collects logs from all control plane nodes (within the cluster: USD oc adm node-logs --role=master -u crio Gather CRI-O journald unit logs from a specific node: USD oc adm node-logs <node_name> -u crio If the API is not functional, review the logs using SSH instead. 
Replace <node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . 7.3.4. Cleaning CRI-O storage You can manually clear the CRI-O ephemeral storage if you experience the following issues: A node cannot run any pods and this error appears: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory You cannot create a new container on a working node and the "can't stat lower layer" error appears: can't stat lower layer ... because it does not exist. Going through storage to recreate the missing symlinks. Your node is in the NotReady state after a cluster upgrade or if you attempt to reboot it. The container runtime implementation ( crio ) is not working properly. You are unable to start a debug shell on the node using oc debug node/<node_name> because the container runtime instance ( crio ) is not working. Follow this process to completely wipe the CRI-O storage and resolve the errors. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Cordon the node. This prevents any workload from being scheduled if the node returns to the Ready status. You will know that scheduling is disabled when SchedulingDisabled is in your Status section: USD oc adm cordon <node_name> Drain the node as the cluster-admin user: USD oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data Note The terminationGracePeriodSeconds attribute of a pod or pod template controls the graceful termination period. This attribute defaults to 30 seconds, but can be customized for each application as necessary. If set to more than 90 seconds, the pod might be killed with SIGKILL and fail to terminate successfully. When the node returns, connect back to the node by using SSH or the console. Then switch to the root user: USD ssh core@<node>.<cluster_name>.<base_domain> USD sudo -i Manually stop the kubelet: # systemctl stop kubelet Stop the containers and pods: Use the following command to stop the pods that are not in the HostNetwork . They must be removed first because their removal relies on the networking plugin pods, which are in the HostNetwork .
for pod in USD(crictl pods -q); do if [[ "USD(crictl inspectp USDpod | jq -r .status.linux.namespaces.options.network)" != "NODE" ]]; then crictl rmp -f USDpod; fi; done Stop all other pods: # crictl rmp -fa Manually stop the crio services: # systemctl stop crio After you run those commands, you can completely wipe the ephemeral storage: # crio wipe -f Start the crio and kubelet service: # systemctl start crio # systemctl start kubelet You will know if the clean up worked if the crio and kubelet services are started, and the node is in the Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.30.3 Mark the node schedulable. You will know that the scheduling is enabled when SchedulingDisabled is no longer in status: USD oc adm uncordon <node_name> Example output NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.30.3 7.4. Troubleshooting operating system issues OpenShift Container Platform runs on RHCOS. You can follow these procedures to troubleshoot problems related to the operating system. 7.4.1. Investigating kernel crashes The kdump service, included in the kexec-tools package, provides a crash-dumping mechanism. You can use this service to save the contents of a system's memory for later analysis. The x86_64 architecture supports kdump in General Availability (GA) status, whereas other architectures support kdump in Technology Preview (TP) status. The following table provides details about the support level of kdump for different architectures. Table 7.1. Kdump support in RHCOS Architecture Support level x86_64 GA aarch64 TP s390x TP ppc64le TP Important Kdump support, for the preceding three architectures in the table, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 7.4.1.1. Enabling kdump RHCOS ships with the kexec-tools package, but manual configuration is required to enable the kdump service. Procedure Perform the following steps to enable kdump on RHCOS. To reserve memory for the crash kernel during the first kernel booting, provide kernel arguments by entering the following command: # rpm-ostree kargs --append='crashkernel=256M' Note For the ppc64le platform, the recommended value for crashkernel is crashkernel=2G-4G:384M,4G-16G:512M,16G-64G:1G,64G-128G:2G,128G-:4G . Optional: To write the crash dump over the network or to some other location, rather than to the default local /var/crash location, edit the /etc/kdump.conf configuration file. Note If your node uses LUKS-encrypted devices, you must use network dumps as kdump does not support saving crash dumps to LUKS-encrypted devices. For details on configuring the kdump service, see the comments in /etc/sysconfig/kdump , /etc/kdump.conf , and the kdump.conf manual page. Also refer to the RHEL kdump documentation for further information on configuring the dump target. 
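As an illustration, a dump target that sends the core over SSH instead of writing to the local /var/crash path could look like the following. This is a sketch only; the user, host name, and key path are placeholders, and makedumpfile requires the -F (flattened) option when dumping over SSH. Confirm the directives against the kdump.conf(5) manual page before using them:

ssh kdump@dump-collector.example.com
sshkey /root/.ssh/kdump_id_rsa
path /var/crash
core_collector makedumpfile -F -l --message-level 7 -d 31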
Important If you have multipathing enabled on your primary disk, the dump target must be either an NFS or SSH server and you must exclude the multipath module from your /etc/kdump.conf configuration file. Enable the kdump systemd service. # systemctl enable kdump.service Reboot your system. # systemctl reboot Ensure that kdump has loaded a crash kernel by checking that the kdump.service systemd service has started and exited successfully and that the command, cat /sys/kernel/kexec_crash_loaded , prints the value 1 . 7.4.1.2. Enabling kdump on day-1 The kdump service is intended to be enabled per node to debug kernel problems. Because there are costs to having kdump enabled, and these costs accumulate with each additional kdump-enabled node, it is recommended that the kdump service only be enabled on each node as needed. Potential costs of enabling the kdump service on each node include: Less available RAM due to memory being reserved for the crash kernel. Node unavailability while the kernel is dumping the core. Additional storage space being used to store the crash dumps. If you are aware of the downsides and trade-offs of having the kdump service enabled, it is possible to enable kdump in a cluster-wide fashion. Although machine-specific machine configs are not yet supported, you can use a systemd unit in a MachineConfig object as a day-1 customization and have kdump enabled on all nodes in the cluster. You can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. Note See "Customizing nodes" in the Installing Installation configuration section for more information and examples on how to use Ignition configs. Procedure Create a MachineConfig object for cluster-wide configuration: Create a Butane config file, 99-worker-kdump.bu , that configures and enables kdump: variant: openshift version: 4.17.0 metadata: name: 99-worker-kdump 1 labels: machineconfiguration.openshift.io/role: worker 2 openshift: kernel_arguments: 3 - crashkernel=256M storage: files: - path: /etc/kdump.conf 4 mode: 0644 overwrite: true contents: inline: | path /var/crash core_collector makedumpfile -l --message-level 7 -d 31 - path: /etc/sysconfig/kdump 5 mode: 0644 overwrite: true contents: inline: | KDUMP_COMMANDLINE_REMOVE="hugepages hugepagesz slub_debug quiet log_buf_len swiotlb" KDUMP_COMMANDLINE_APPEND="irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable" 6 KEXEC_ARGS="-s" KDUMP_IMG="vmlinuz" systemd: units: - name: kdump.service enabled: true 1 2 Replace worker with master in both locations when creating a MachineConfig object for control plane nodes. 3 Provide kernel arguments to reserve memory for the crash kernel. You can add other kernel arguments if necessary. For the ppc64le platform, the recommended value for crashkernel is crashkernel=2G-4G:384M,4G-16G:512M,16G-64G:1G,64G-128G:2G,128G-:4G . 4 If you want to change the contents of /etc/kdump.conf from the default, include this section and modify the inline subsection accordingly. 5 If you want to change the contents of /etc/sysconfig/kdump from the default, include this section and modify the inline subsection accordingly. 6 For the ppc64le platform, replace nr_cpus=1 with maxcpus=1 , which is not supported on this platform. 
Note To export the dumps to NFS targets, some kernel modules must be explicitly added to the configuration file: Example /etc/kdump.conf file nfs server.example.com:/export/cores core_collector makedumpfile -l --message-level 7 -d 31 extra_bins /sbin/mount.nfs extra_modules nfs nfsv3 nfs_layout_nfsv41_files blocklayoutdriver nfs_layout_flexfiles nfs_layout_nfsv41_files Use Butane to generate a machine config YAML file, 99-worker-kdump.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-kdump.bu -o 99-worker-kdump.yaml Put the YAML file into the <installation_directory>/manifests/ directory during cluster setup. You can also create this MachineConfig object after cluster setup with the YAML file: USD oc create -f 99-worker-kdump.yaml 7.4.1.3. Testing the kdump configuration See the Testing the kdump configuration section in the RHEL documentation for kdump. 7.4.1.4. Analyzing a core dump See the Analyzing a core dump section in the RHEL documentation for kdump. Note It is recommended to perform vmcore analysis on a separate RHEL system. Additional resources Setting up kdump in RHEL Linux kernel documentation for kdump kdump.conf(5) - a manual page for the /etc/kdump.conf configuration file containing the full documentation of available options kexec(8) - a manual page for the kexec package Red Hat Knowledgebase article regarding kexec and kdump 7.4.2. Debugging Ignition failures If a machine cannot be provisioned, Ignition fails and RHCOS will boot into the emergency shell. Use the following procedure to get debugging information. Procedure Run the following command to show which service units failed: USD systemctl --failed Optional: Run the following command on an individual service unit to find out more information: USD journalctl -u <unit>.service 7.5. Troubleshooting network issues 7.5.1. How the network interface is selected For installations on bare metal or with virtual machines that have more than one network interface controller (NIC), the NIC that OpenShift Container Platform uses for communication with the Kubernetes API server is determined by the nodeip-configuration.service service unit that is run by systemd when the node boots. The nodeip-configuration.service selects the IP from the interface associated with the default route. After the nodeip-configuration.service service determines the correct NIC, the service creates the /etc/systemd/system/kubelet.service.d/20-nodenet.conf file. The 20-nodenet.conf file sets the KUBELET_NODE_IP environment variable to the IP address that the service selected. When the kubelet service starts, it reads the value of the environment variable from the 20-nodenet.conf file and sets the IP address as the value of the --node-ip kubelet command-line argument. As a result, the kubelet service uses the selected IP address as the node IP address. If hardware or networking is reconfigured after installation, or if there is a networking layout where the node IP should not come from the default route interface, it is possible for the nodeip-configuration.service service to select a different NIC after a reboot. In some cases, you might be able to detect that a different NIC is selected by reviewing the INTERNAL-IP column in the output from the oc get nodes -o wide command. If network communication is disrupted or misconfigured because a different NIC is selected, you might receive the following error: EtcdCertSignerControllerDegraded . 
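To confirm which address was selected on a given node, you can inspect the generated drop-in file and compare it with the internal IP that the API reports. This is a sketch; replace the placeholders with your node and cluster names:

# Show the KUBELET_NODE_IP value written by nodeip-configuration.service.
ssh core@<node>.<cluster_name>.<base_domain> cat /etc/systemd/system/kubelet.service.d/20-nodenet.conf

# Compare the value with the INTERNAL-IP column for the same node.
oc get nodes -o wide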
You can create a hint file that includes the NODEIP_HINT variable to override the default IP selection logic. For more information, see Optional: Overriding the default node IP selection logic. 7.5.1.1. Optional: Overriding the default node IP selection logic To override the default IP selection logic, you can create a hint file that includes the NODEIP_HINT variable to override the default IP selection logic. Creating a hint file allows you to select a specific node IP address from the interface in the subnet of the IP address specified in the NODEIP_HINT variable. For example, if a node has two interfaces, eth0 with an address of 10.0.0.10/24 , and eth1 with an address of 192.0.2.5/24 , and the default route points to eth0 ( 10.0.0.10 ),the node IP address would normally use the 10.0.0.10 IP address. Users can configure the NODEIP_HINT variable to point at a known IP in the subnet, for example, a subnet gateway such as 192.0.2.1 so that the other subnet, 192.0.2.0/24 , is selected. As a result, the 192.0.2.5 IP address on eth1 is used for the node. The following procedure shows how to override the default node IP selection logic. Procedure Add a hint file to your /etc/default/nodeip-configuration file, for example: NODEIP_HINT=192.0.2.1 Important Do not use the exact IP address of a node as a hint, for example, 192.0.2.5 . Using the exact IP address of a node causes the node using the hint IP address to fail to configure correctly. The IP address in the hint file is only used to determine the correct subnet. It will not receive traffic as a result of appearing in the hint file. Generate the base-64 encoded content by running the following command: USD echo -n 'NODEIP_HINT=192.0.2.1' | base64 -w0 Example output Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx== Activate the hint by creating a machine config manifest for both master and worker roles before deploying the cluster: 99-nodeip-hint-master.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-nodeip-hint-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration 1 Replace <encoded_contents> with the base64-encoded content of the /etc/default/nodeip-configuration file, for example, Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx== . Note that a space is not acceptable after the comma and before the encoded content. 99-nodeip-hint-worker.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-nodeip-hint-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration 1 Replace <encoded_contents> with the base64-encoded content of the /etc/default/nodeip-configuration file, for example, Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx== . Note that a space is not acceptable after the comma and before the encoded content. Save the manifest to the directory where you store your cluster configuration, for example, ~/clusterconfigs . Deploy the cluster. 7.5.1.2. 
Configuring OVN-Kubernetes to use a secondary OVS bridge You can create an additional or secondary Open vSwitch (OVS) bridge, br-ex1 , that OVN-Kubernetes manages and the Multiple External Gateways (MEG) implementation uses for defining external gateways for an OpenShift Container Platform node. You can define a MEG in an AdminPolicyBasedExternalRoute custom resource (CR). The MEG implementation provides a pod with access to multiple gateways, equal-cost multipath (ECMP) routes, and the Bidirectional Forwarding Detection (BFD) implementation. Consider a use case where pods are impacted by the Multiple External Gateways (MEG) feature and you want to egress traffic through a different interface, for example br-ex1 , on a node. Egress traffic for pods not impacted by MEG gets routed to the default OVS br-ex bridge. Important Currently, MEG is unsupported for use with other egress features, such as egress IP, egress firewalls, or egress routers. Attempting to use MEG with egress features like egress IP can result in routing and traffic flow conflicts. This occurs because of how OVN-Kubernetes handles routing and source network address translation (SNAT). This results in inconsistent routing and might break connections in some environments where the return path must match the incoming path. You must define the additional bridge in an interface definition of a machine configuration manifest file. The Machine Config Operator uses the manifest to create a new file at /etc/ovnk/extra_bridge on the host. The new file includes the name of the network interface that the additional OVS bridge configures for a node. After you create and edit the manifest file, the Machine Config Operator completes tasks in the following order: Drains nodes one at a time, based on the selected machine configuration pool. Injects Ignition configuration files into each node, so that each node receives the additional br-ex1 bridge network configuration. Verifies that the br-ex MAC address matches the MAC address for the interface that br-ex uses for the network connection. Executes the configure-ovs.sh shell script that references the new interface definition. Adds br-ex and br-ex1 to the host node. Uncordons the nodes. Note After all the nodes return to the Ready state and the OVN-Kubernetes Operator detects and configures br-ex and br-ex1 , the Operator applies the k8s.ovn.org/l3-gateway-config annotation to each node. For more information about useful situations for the additional br-ex1 bridge and a situation that always requires the default br-ex bridge, see "Configuration for a localnet topology". Procedure Optional: Create an interface connection that your additional bridge, br-ex1 , can use by completing the following steps. The example steps show the creation of a new bond and its dependent interfaces that are all defined in a machine configuration manifest file. The additional bridge uses the MachineConfig object to form an additional bond interface. Important Do not use the Kubernetes NMState Operator or a NodeNetworkConfigurationPolicy (NNCP) manifest file to define the additional interface. Also ensure that the additional interface, or any sub-interfaces used when defining a bond interface, are not used by an existing br-ex OVN-Kubernetes network deployment. Create the following interface definition files. These files get added to a machine configuration manifest file so that host nodes can access the definition files.
Example of the first interface definition file that is named eno1.config [connection] id=eno1 type=ethernet interface-name=eno1 master=bond1 slave-type=bond autoconnect=true autoconnect-priority=20 Example of the second interface definition file that is named eno2.config [connection] id=eno2 type=ethernet interface-name=eno2 master=bond1 slave-type=bond autoconnect=true autoconnect-priority=20 Example of the bond interface definition file that is named bond1.config [connection] id=bond1 type=bond interface-name=bond1 autoconnect=true connection.autoconnect-slaves=1 autoconnect-priority=20 [bond] mode=802.3ad miimon=100 xmit_hash_policy="layer3+4" [ipv4] method=auto Convert the definition files to Base64-encoded strings by running the following command: USD base64 <directory_path>/eno1.config USD base64 <directory_path>/eno2.config USD base64 <directory_path>/bond1.config Prepare the environment variables. Replace <machine_role> with the node role, such as worker , and replace <interface_name> with the name of your additional br-ex bridge. USD export ROLE=<machine_role> Add each interface definition to a machine configuration manifest file: Example of a machine configuration file with definitions added for bond1 , eno1 , and eno2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-USD{ROLE}-sec-bridge-cni spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:;base64,<base-64-encoded-contents-for-bond1.conf> path: /etc/NetworkManager/system-connections/bond1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno1.conf> path: /etc/NetworkManager/system-connections/eno1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno2.conf> path: /etc/NetworkManager/system-connections/eno2.nmconnection filesystem: root mode: 0600 # ... Create a machine configuration manifest file for configuring the network plugin by entering the following command in your terminal: USD oc create -f <machine_config_file_name> Create an Open vSwitch (OVS) bridge, br-ex1 , on nodes by using the OVN-Kubernetes network plugin to create an extra_bridge file. Ensure that you save the file in the /etc/ovnk/extra_bridge path of the host. The file must state the interface name that supports the additional bridge and not the default interface that supports br-ex , which holds the primary IP address of the node. For example, if the additional interface is bond1 , the /etc/ovnk/extra_bridge file contains a single line: bond1 Create a machine configuration manifest file that defines the existing static interface that hosts br-ex1 on any nodes restarted on your cluster: Example of a machine configuration file that defines bond1 as the interface for hosting br-ex1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-extra-bridge spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/ovnk/extra_bridge mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond1 filesystem: root Apply the machine configuration to your selected nodes: USD oc create -f <machine_config_file_name> Optional: You can override the br-ex selection logic for nodes by creating a machine configuration file that in turn creates a /var/lib/ovnk/iface_default_hint resource.
Note The resource lists the name of the interface that br-ex selects for your cluster. By default, br-ex selects the primary interface for a node based on boot order and the IP address subnet in the machine network. Certain machine network configurations might require that br-ex continues to select the default interfaces or bonds for a host node. Create a machine configuration file on the host node to override the default interface. Important Only create this machine configuration file for the purposes of changing the br-ex selection logic. Using this file to change the IP addresses of existing nodes in your cluster is not supported. Example of a machine configuration file that overrides the default interface apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-br-ex-override spec: config: ignition: version: 3.2.0 storage: files: - path: /var/lib/ovnk/iface_default_hint mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond0 1 filesystem: root 1 Ensure bond0 exists on the node before you apply the machine configuration file to the node. Before you apply the configuration to all new nodes in your cluster, reboot the host node to verify that br-ex selects the intended interface and does not conflict with the new interfaces that you defined on br-ex1 . Apply the machine configuration file to all new nodes in your cluster: USD oc create -f <machine_config_file_name> Verification Identify the IP addresses of nodes with the exgw-ip-addresses label in your cluster to verify that the nodes use the additional bridge instead of the default bridge: USD oc get nodes -o json | grep --color exgw-ip-addresses Example output "k8s.ovn.org/l3-gateway-config": \"exgw-ip-address\":\"172.xx.xx.yy/24\",\"next-hops\":[\"xx.xx.xx.xx\"], Observe that the additional bridge exists on target nodes by reviewing the network interface names on the host node: USD oc debug node/<node_name> -- chroot /host sh -c "ip a | grep mtu | grep br-ex" Example output Starting pod/worker-1-debug ... To use host binaries, run `chroot /host` # ... 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 6: br-ex1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 Optional: If you use /var/lib/ovnk/iface_default_hint , check that the MAC address of br-ex matches the MAC address of the primary selected interface: USD oc debug node/<node_name> -- chroot /host sh -c "ip a | grep -A1 -E 'br-ex|bond0'" Example output that shows the primary interface for br-ex as bond0 Starting pod/worker-1-debug ... To use host binaries, run `chroot /host` # ... sh-5.1# ip a | grep -A1 -E 'br-ex|bond0' 2: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff -- 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff inet 10.xx.xx.xx/21 brd 10.xx.xx.255 scope global dynamic noprefixroute br-ex Additional resources Configure an external gateway on the default network 7.5.2. Troubleshooting Open vSwitch issues To troubleshoot some Open vSwitch (OVS) issues, you might need to configure the log level to include more information.
If you modify the log level on a node temporarily, be aware that you can receive log messages from the machine config daemon on the node like the following example: E0514 12:47:17.998892 2281 daemon.go:1350] content mismatch for file /etc/systemd/system/ovs-vswitchd.service: [Unit] To avoid the log messages related to the mismatch, revert the log level change after you complete your troubleshooting. 7.5.2.1. Configuring the Open vSwitch log level temporarily For short-term troubleshooting, you can configure the Open vSwitch (OVS) log level temporarily. The following procedure does not require rebooting the node. In addition, the configuration change does not persist whenever you reboot the node. After you perform this procedure to change the log level, you can receive log messages from the machine config daemon that indicate a content mismatch for the ovs-vswitchd.service . To avoid the log messages, repeat this procedure and set the log level to the original value. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Start a debug pod for a node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell. The debug pod mounts the root file system from the host in /host within the pod. By changing the root directory to /host , you can run binaries from the host file system: # chroot /host View the current syslog level for OVS modules: # ovs-appctl vlog/list The following example output shows the log level for syslog set to info . Example output console syslog file ------- ------ ------ backtrace OFF INFO INFO bfd OFF INFO INFO bond OFF INFO INFO bridge OFF INFO INFO bundle OFF INFO INFO bundles OFF INFO INFO cfm OFF INFO INFO collectors OFF INFO INFO command_line OFF INFO INFO connmgr OFF INFO INFO conntrack OFF INFO INFO conntrack_tp OFF INFO INFO coverage OFF INFO INFO ct_dpif OFF INFO INFO daemon OFF INFO INFO daemon_unix OFF INFO INFO dns_resolve OFF INFO INFO dpdk OFF INFO INFO ... Specify the log level in the /etc/systemd/system/ovs-vswitchd.service.d/10-ovs-vswitchd-restart.conf file: Restart=always ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /var/lib/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /etc/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /run/openvswitch' ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg In the preceding example, the log level is set to dbg . Change the last two lines by setting syslog:<log_level> to off , emer , err , warn , info , or dbg . The off log level filters out all log messages. Restart the service: # systemctl daemon-reload # systemctl restart ovs-vswitchd 7.5.2.2. Configuring the Open vSwitch log level permanently For long-term changes to the Open vSwitch (OVS) log level, you can change the log level permanently. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). 
Procedure Create a file, such as 99-change-ovs-loglevel.yaml , with a MachineConfig object like the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master 1 name: 99-change-ovs-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - dropins: - contents: | [Service] ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg 2 ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg name: 20-ovs-vswitchd-restart.conf name: ovs-vswitchd.service 1 After you perform this procedure to configure control plane nodes, repeat the procedure and set the role to worker to configure worker nodes. 2 Set the syslog:<log_level> value. Log levels are off , emer , err , warn , info , or dbg . Setting the value to off filters out all log messages. Apply the machine config: USD oc apply -f 99-change-ovs-loglevel.yaml Additional resources Understanding the Machine Config Operator Checking machine config pool status 7.5.2.3. Displaying Open vSwitch logs Use the following procedure to display Open vSwitch (OVS) logs. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Run one of the following commands: Display the logs by using the oc command from outside the cluster: USD oc adm node-logs <node_name> -u ovs-vswitchd Display the logs after logging on to a node in the cluster: # journalctl -b -f -u ovs-vswitchd.service One way to log on to a node is by using the oc debug node/<node_name> command. 7.6. Troubleshooting Operator issues Operators are a method of packaging, deploying, and managing an OpenShift Container Platform application. They act like an extension of the software vendor's engineering team, watching over an OpenShift Container Platform environment and using its current state to make decisions in real time. Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, such as skipping a software backup process to save time. OpenShift Container Platform 4.17 includes a default set of Operators that are required for proper functioning of the cluster. These default Operators are managed by the Cluster Version Operator (CVO). As a cluster administrator, you can install application Operators from the OperatorHub using the OpenShift Container Platform web console or the CLI. You can then subscribe the Operator to one or more namespaces to make it available for developers on your cluster. Application Operators are managed by Operator Lifecycle Manager (OLM). If you experience Operator issues, verify Operator subscription status. Check Operator pod health across the cluster and gather Operator logs for diagnosis. 7.6.1. Operator subscription condition types Subscriptions can report the following condition types: Table 7.2. Subscription condition types Condition Description CatalogSourcesUnhealthy Some or all of the catalog sources to be used in resolution are unhealthy. InstallPlanMissing An install plan for a subscription is missing. InstallPlanPending An install plan for a subscription is pending installation. InstallPlanFailed An install plan for a subscription has failed. ResolutionFailed The dependency resolution for a subscription has failed. Note Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. 
Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object. Additional resources Catalog health requirements 7.6.2. Viewing Operator subscription status by using the CLI You can view Operator subscription status by using the CLI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List Operator subscriptions: USD oc get subs -n <operator_namespace> Use the oc describe command to inspect a Subscription resource: USD oc describe sub <subscription_name> -n <operator_namespace> In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy: Example output Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription # ... Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy # ... Note Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object. 7.6.3. Viewing Operator catalog source status by using the CLI You can view the status of an Operator catalog source by using the CLI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources: USD oc get catalogsources -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m Use the oc describe command to get more details and status about a catalog source: USD oc describe catalogsource example-catalog -n openshift-marketplace Example output Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource # ... Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace # ... In the preceding example output, the last observed state is TRANSIENT_FAILURE . This state indicates that there is a problem establishing a connection for the catalog source. 
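If you only need the connection state rather than the full description, a jsonpath query is a convenient shortcut. This is a sketch that assumes the catalog source populates the status.connectionState field shown in the preceding output: USD oc get catalogsource example-catalog -n openshift-marketplace -o jsonpath='{.status.connectionState.lastObservedState}' Example output TRANSIENT_FAILURE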
List the pods in the namespace where your catalog source was created: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff . This status indicates that there is an issue pulling the catalog source's index image. Use the oc describe command to inspect a pod for more detailed information: USD oc describe pod example-catalog-bwt8z -n openshift-marketplace Example output Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image "quay.io/example-org/example-catalog:v1" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image "quay.io/example-org/example-catalog:v1" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull In the preceding example output, the error messages indicate that the catalog source's index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials. Additional resources Operator Lifecycle Manager concepts and resources Catalog source gRPC documentation: States of Connectivity Accessing images for Operators from private registries 7.6.4. Querying Operator pod status You can list Operator pods within a cluster and their status. You can also collect a detailed Operator pod summary. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure List Operators running in the cluster. The output includes Operator version, availability, and up-time information: USD oc get clusteroperators List Operator pods running in the Operator's namespace, plus pod status, restarts, and age: USD oc get pod -n <operator_namespace> Output a detailed Operator pod summary: USD oc describe pod <operator_pod_name> -n <operator_namespace> If an Operator issue is node-specific, query Operator container status on that node. Start a debug pod for the node: USD oc debug node/my-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. 
By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. List details about the node's containers, including state and associated pod IDs: # crictl ps List information about a specific Operator container on the node. The following example lists information about the network-operator container: # crictl ps --name network-operator Exit from the debug shell. 7.6.5. Gathering Operator logs If you experience Operator issues, you can gather detailed diagnostic information from Operator pod logs. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). You have the fully qualified domain names of the control plane or control plane machines. Procedure List the Operator pods that are running in the Operator's namespace, plus the pod status, restarts, and age: USD oc get pods -n <operator_namespace> Review logs for an Operator pod: USD oc logs pod/<pod_name> -n <operator_namespace> If an Operator pod has multiple containers, the preceding command will produce an error that includes the name of each container. Query logs from an individual container: USD oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace> If the API is not functional, review Operator pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values. List pods on each control plane node: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods For any Operator pods not showing a Ready status, inspect the pod's status in detail. Replace <operator_pod_id> with the Operator pod's ID listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id> List containers related to an Operator pod: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id> For any Operator container not showing a Ready status, inspect the container's status in detail. Replace <container_id> with a container ID listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id> Review the logs for any Operator containers not showing a Ready status. Replace <container_id> with a container ID listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id> Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. 
However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . 7.6.6. Disabling the Machine Config Operator from automatically rebooting When configuration changes are made by the Machine Config Operator (MCO), Red Hat Enterprise Linux CoreOS (RHCOS) must reboot for the changes to take effect. Whether the configuration change is automatic or manual, an RHCOS node reboots automatically unless it is paused. Note The following modifications do not trigger a node reboot: When the MCO detects any of the following changes, it applies the update without draining or rebooting the node: Changes to the SSH key in the spec.config.passwd.users.sshAuthorizedKeys parameter of a machine config. Changes to the global pull secret or pull secret in the openshift-config namespace. Automatic rotation of the /etc/kubernetes/kubelet-ca.crt certificate authority (CA) by the Kubernetes API Server Operator. When the MCO detects changes to the /etc/containers/registries.conf file, such as adding or editing an ImageDigestMirrorSet , ImageTagMirrorSet , or ImageContentSourcePolicy object, it drains the corresponding nodes, applies the changes, and uncordons the nodes. The node drain does not happen for the following changes: The addition of a registry with the pull-from-mirror = "digest-only" parameter set for each mirror. The addition of a mirror with the pull-from-mirror = "digest-only" parameter set in a registry. The addition of items to the unqualified-search-registries list. To avoid unwanted disruptions, you can modify the machine config pool (MCP) to prevent automatic rebooting after the Operator makes changes to the machine config. 7.6.6.1. Disabling the Machine Config Operator from automatically rebooting by using the console To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can use the OpenShift Container Platform web console to modify the machine config pool (MCP) to prevent the MCO from making any changes to nodes in that pool. This prevents any reboots that would normally be part of the MCO update process. Note See second NOTE in Disabling the Machine Config Operator from automatically rebooting . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure To pause or unpause automatic MCO update rebooting: Pause the autoreboot process: Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Click Compute MachineConfigPools . On the MachineConfigPools page, click either master or worker , depending upon which nodes you want to pause rebooting for. On the master or worker page, click YAML . In the YAML, update the spec.paused field to true . Sample MachineConfigPool object apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool # ... spec: # ... paused: true 1 # ... 1 Update the spec.paused field to true to pause rebooting. To verify that the MCP is paused, return to the MachineConfigPools page. On the MachineConfigPools page, the Paused column reports True for the MCP you modified. If the MCP has pending changes while paused, the Updated column is False and Updating is False . When Updated is True and Updating is False , there are no pending changes. 
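You can also confirm the paused state from a terminal by checking the spec.paused field on the machine config pool; the CLI procedure in the next section covers this in full: USD oc get machineconfigpool/worker --template='{{.spec.paused}}' Example output true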
Important If there are pending changes (where both the Updated and Updating columns are False ), it is recommended to schedule a maintenance window for a reboot as early as possible. Use the following steps for unpausing the autoreboot process to apply the changes that were queued since the last reboot. Unpause the autoreboot process: Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Click Compute MachineConfigPools . On the MachineConfigPools page, click either master or worker , depending upon which nodes you want to unpause rebooting for. On the master or worker page, click YAML . In the YAML, update the spec.paused field to false . Sample MachineConfigPool object apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool # ... spec: # ... paused: false 1 # ... 1 Update the spec.paused field to false to allow rebooting. Note By unpausing an MCP, the MCO applies all paused changes and reboots Red Hat Enterprise Linux CoreOS (RHCOS) as needed. To verify that the MCP is unpaused, return to the MachineConfigPools page. On the MachineConfigPools page, the Paused column reports False for the MCP you modified. If the MCP is applying any pending changes, the Updated column is False and the Updating column is True . When Updated is True and Updating is False , there are no further changes being made. 7.6.6.2. Disabling the Machine Config Operator from automatically rebooting by using the CLI To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can modify the machine config pool (MCP) using the OpenShift CLI (oc) to prevent the MCO from making any changes to nodes in that pool. This prevents any reboots that would normally be part of the MCO update process. Note See second NOTE in Disabling the Machine Config Operator from automatically rebooting . Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure To pause or unpause automatic MCO update rebooting: Pause the autoreboot process: Update the MachineConfigPool custom resource to set the spec.paused field to true . Control plane (master) nodes USD oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/master Worker nodes USD oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/worker Verify that the MCP is paused: Control plane (master) nodes USD oc get machineconfigpool/master --template='{{.spec.paused}}' Worker nodes USD oc get machineconfigpool/worker --template='{{.spec.paused}}' Example output true The spec.paused field is true and the MCP is paused. Determine if the MCP has pending changes: USD oc get machineconfigpool In the output, if the UPDATED column is False and UPDATING is False , there are pending changes. When UPDATED is True and UPDATING is False , there are no pending changes. For example, if the worker pool shows UPDATED as False and the control plane (master) pool shows UPDATED as True , the worker nodes have pending changes and the control plane nodes do not. Important If there are pending changes (where both the Updated and Updating columns are False ), it is recommended to schedule a maintenance window for a reboot as early as possible. Use the following steps for unpausing the autoreboot process to apply the changes that were queued since the last reboot. Unpause the autoreboot process: Update the MachineConfigPool custom resource to set the spec.paused field to false .
Control plane (master) nodes USD oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/master Worker nodes USD oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/worker Note By unpausing an MCP, the MCO applies all paused changes and reboots Red Hat Enterprise Linux CoreOS (RHCOS) as needed. Verify that the MCP is unpaused: Control plane (master) nodes USD oc get machineconfigpool/master --template='{{.spec.paused}}' Worker nodes USD oc get machineconfigpool/worker --template='{{.spec.paused}}' Example output false The spec.paused field is false and the MCP is unpaused. Determine if the MCP has pending changes: USD oc get machineconfigpool Example output If the MCP is applying any pending changes, the UPDATED column is False and the UPDATING column is True . When UPDATED is True and UPDATING is False , there are no further changes being made. In the example, the MCO is updating the worker node. 7.6.7. Refreshing failing subscriptions In Operator Lifecycle Manager (OLM), if you subscribe to an Operator that references images that are not accessible on your network, you can find jobs in the openshift-marketplace namespace that are failing with the following errors: Example output ImagePullBackOff for Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e" Example output rpc error: code = Unknown desc = error pinging docker registry example.com: Get "https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade. You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and other related objects. After recreating the subscription, OLM then reinstalls the correct version of the Operator. Prerequisites You have a failing subscription that is unable to pull an inaccessible bundle image. You have confirmed that the correct bundle image is accessible. Procedure Get the names of the Subscription and ClusterServiceVersion objects from the namespace where the Operator is installed: USD oc get sub,csv -n <namespace> Example output NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded Delete the subscription: USD oc delete subscription <subscription_name> -n <namespace> Delete the cluster service version: USD oc delete csv <csv_name> -n <namespace> Get the names of any failing jobs and related config maps in the openshift-marketplace namespace: USD oc get job,configmap -n openshift-marketplace Example output NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s Delete the job: USD oc delete job <job_name> -n openshift-marketplace This ensures pods that try to pull the inaccessible image are not recreated. Delete the config map: USD oc delete configmap <configmap_name> -n openshift-marketplace Reinstall the Operator using OperatorHub in the web console. Verification Check that the Operator has been reinstalled successfully: USD oc get sub,csv,installplan -n <namespace> 7.6.8. 
Reinstalling Operators after failed uninstallation You must successfully and completely uninstall an Operator prior to attempting to reinstall the same Operator. Failure to fully uninstall the Operator properly can leave resources, such as a project or namespace, stuck in a "Terminating" state and cause "error resolving resource" messages. For example: Example Project resource description These types of issues can prevent an Operator from being reinstalled successfully. Warning Forced deletion of a namespace is not likely to resolve "Terminating" state issues and can lead to unstable or unpredictable cluster behavior, so it is better to try to find related resources that might be preventing the namespace from being deleted. For more information, see the Red Hat Knowledgebase Solution #4165791 , paying careful attention to the cautions and warnings. The following procedure shows how to troubleshoot when an Operator cannot be reinstalled because an existing custom resource definition (CRD) from a installation of the Operator is preventing a related namespace from deleting successfully. Procedure Check if there are any namespaces related to the Operator that are stuck in "Terminating" state: USD oc get namespaces Example output Check if there are any CRDs related to the Operator that are still present after the failed uninstallation: USD oc get crds Note CRDs are global cluster definitions; the actual custom resource (CR) instances related to the CRDs could be in other namespaces or be global cluster instances. If there are any CRDs that you know were provided or managed by the Operator and that should have been deleted after uninstallation, delete the CRD: USD oc delete crd <crd_name> Check if there are any remaining CR instances related to the Operator that are still present after uninstallation, and if so, delete the CRs: The type of CRs to search for can be difficult to determine after uninstallation and can require knowing what CRDs the Operator manages. For example, if you are troubleshooting an uninstallation of the etcd Operator, which provides the EtcdCluster CRD, you can search for remaining EtcdCluster CRs in a namespace: USD oc get EtcdCluster -n <namespace_name> Alternatively, you can search across all namespaces: USD oc get EtcdCluster --all-namespaces If there are any remaining CRs that should be removed, delete the instances: USD oc delete <cr_name> <cr_instance_name> -n <namespace_name> Check that the namespace deletion has successfully resolved: USD oc get namespace <namespace_name> Important If the namespace or other Operator resources are still not uninstalled cleanly, contact Red Hat Support. Reinstall the Operator using OperatorHub in the web console. Verification Check that the Operator has been reinstalled successfully: USD oc get sub,csv,installplan -n <namespace> Additional resources Deleting Operators from a cluster Adding Operators to a cluster 7.7. Investigating pod issues OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host. A pod is the smallest compute unit that can be defined, deployed, and managed on OpenShift Container Platform 4.17. After a pod is defined, it is assigned to run on a node until its containers exit, or until it is removed. Depending on policy and exit code, pods are either removed after exiting or retained so that their logs can be accessed. The first thing to check when pod issues arise is the pod's status. 
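For example, one quick way to surface problem pods is to filter healthy entries out of the pod list. This is a rough sketch; adjust the namespace or the grep pattern to your environment: USD oc get pods --all-namespaces | grep -vE 'Running|Completed'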
If an explicit pod failure has occurred, observe the pod's error state to identify specific image, container, or pod network issues. Focus diagnostic data collection according to the error state. Review pod event messages, as well as pod and container log information. Diagnose issues dynamically by accessing running Pods on the command line, or start a debug pod with root access based on a problematic pod's deployment configuration. 7.7.1. Understanding pod error states Pod failures return explicit error states that can be observed in the status field in the output of oc get pods . Pod error states cover image, container, and container network related failures. The following table provides a list of pod error states along with their descriptions. Table 7.3. Pod error states Pod error state Description ErrImagePull Generic image retrieval error. ErrImagePullBackOff Image retrieval failed and is backed off. ErrInvalidImageName The specified image name was invalid. ErrImageInspect Image inspection did not succeed. ErrImageNeverPull PullPolicy is set to NeverPullImage and the target image is not present locally on the host. ErrRegistryUnavailable When attempting to retrieve an image from a registry, an HTTP error was encountered. ErrContainerNotFound The specified container is either not present or not managed by the kubelet, within the declared pod. ErrRunInitContainer Container initialization failed. ErrRunContainer None of the pod's containers started successfully. ErrKillContainer None of the pod's containers were killed successfully. ErrCrashLoopBackOff A container has terminated. The kubelet will not attempt to restart it. ErrVerifyNonRoot A container or image attempted to run with root privileges. ErrCreatePodSandbox Pod sandbox creation did not succeed. ErrConfigPodSandbox Pod sandbox configuration was not obtained. ErrKillPodSandbox A pod sandbox did not stop successfully. ErrSetupNetwork Network initialization failed. ErrTeardownNetwork Network termination failed. 7.7.2. Reviewing pod status You can query pod status and error states. You can also query a pod's associated deployment configuration and review base image availability. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). skopeo is installed. Procedure Switch into a project: USD oc project <project_name> List pods running within the namespace, as well as pod status, error states, restarts, and age: USD oc get pods Determine whether the namespace is managed by a deployment configuration: USD oc status If the namespace is managed by a deployment configuration, the output includes the deployment configuration name and a base image reference. Inspect the base image referenced in the preceding command's output: USD skopeo inspect docker://<image_reference> If the base image reference is not correct, update the reference in the deployment configuration: USD oc edit deployment/my-deployment When deployment configuration changes on exit, the configuration will automatically redeploy. Watch pod status as the deployment progresses, to determine whether the issue has been resolved: USD oc get pods -w Review events within the namespace for diagnostic information relating to pod failures: USD oc get events 7.7.3. Inspecting pod and container logs You can inspect pod and container logs for warnings and error messages related to explicit pod failures. Depending on policy and exit code, pod and container logs remain available after pods have been terminated. 
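Because container logs can remain available after termination, you can often retrieve output from the last failed container instance as well. A minimal sketch, assuming the pod object still exists and the previous container instance was retained: USD oc logs <pod_name> --previous You can combine --previous with -c <container_name> for multi-container pods.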
Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Query logs for a specific pod: USD oc logs <pod_name> Query logs for a specific container within a pod: USD oc logs <pod_name> -c <container_name> Logs retrieved using the preceding oc logs commands are composed of messages sent to stdout within pods or containers. Inspect logs contained in /var/log/ within a pod. List log files and subdirectories contained in /var/log within a pod: USD oc exec <pod_name> -- ls -alh /var/log Example output total 124K drwxr-xr-x. 1 root root 33 Aug 11 11:23 . drwxr-xr-x. 1 root root 28 Sep 6 2022 .. -rw-rw----. 1 root utmp 0 Jul 10 10:31 btmp -rw-r--r--. 1 root root 33K Jul 17 10:07 dnf.librepo.log -rw-r--r--. 1 root root 69K Jul 17 10:07 dnf.log -rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log -rw-r--r--. 1 root root 480 Jul 17 10:07 hawkey.log -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 lastlog drwx------. 2 root root 23 Aug 11 11:14 openshift-apiserver drwx------. 2 root root 6 Jul 10 10:31 private drwxr-xr-x. 1 root root 22 Mar 9 08:05 rhsm -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 wtmp Query a specific log file contained in /var/log within a pod: USD oc exec <pod_name> cat /var/log/<path_to_log> Example output 2023-07-10T10:29:38+0000 INFO --- logging initialized --- 2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms 2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile 2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories. 2023-07-10T10:29:38+0000 INFO Unable to read consumer identity 2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode. 2023-07-10T10:29:38+0000 INFO List log files and subdirectories contained in /var/log within a specific container: USD oc exec <pod_name> -c <container_name> ls /var/log Query a specific log file contained in /var/log within a specific container: USD oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log> 7.7.4. Accessing running pods You can review running pods dynamically by opening a shell inside a pod or by gaining network access through port forwarding. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Switch into the project that contains the pod you would like to access. This is necessary because the oc rsh command does not accept the -n namespace option: USD oc project <namespace> Start a remote shell into a pod: USD oc rsh <pod_name> 1 1 If a pod has multiple containers, oc rsh defaults to the first container unless -c <container_name> is specified. Start a remote shell into a specific container within a pod: USD oc rsh -c <container_name> pod/<pod_name> Create a port forwarding session to a port on a pod: USD oc port-forward <pod_name> <host_port>:<pod_port> 1 1 Enter Ctrl+C to cancel the port forwarding session. 7.7.5. Starting debug pods with root access You can start a debug pod with root access, based on a problematic pod's deployment or deployment configuration. 
Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Start a debug pod with root access, based on a deployment. Obtain a project's deployment name: USD oc get deployment -n <project_name> Start a debug pod with root privileges, based on the deployment: USD oc debug deployment/my-deployment --as-root -n <project_name> Start a debug pod with root access, based on a deployment configuration. Obtain a project's deployment configuration name: USD oc get deploymentconfigs -n <project_name> Start a debug pod with root privileges, based on the deployment configuration: USD oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name> Note You can append -- <command> to the preceding oc debug commands to run individual commands within a debug pod, instead of running an interactive shell. 7.7.6. Copying files to and from pods and containers You can copy files to and from a pod to test configuration changes or gather diagnostic information. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Copy a file to a pod: USD oc cp <local_path> <pod_name>:/<path> -c <container_name> 1 1 The first container in a pod is selected if the -c option is not specified. Copy a file from a pod: USD oc cp <pod_name>:/<path> -c <container_name> <local_path> 1 1 The first container in a pod is selected if the -c option is not specified. Note For oc cp to function, the tar binary must be available within the container. 7.8. Troubleshooting the Source-to-Image process 7.8.1. Strategies for Source-to-Image troubleshooting Use Source-to-Image (S2I) to build reproducible, Docker-formatted container images. You can create ready-to-run images by injecting application source code into a container image and assembling a new image. The new image incorporates the base image (the builder) and built source. To determine where in the S2I process a failure occurs, you can observe the state of the pods relating to each of the following S2I stages: During the build configuration stage , a build pod is used to create an application container image from a base image and application source code. During the deployment configuration stage , a deployment pod is used to deploy application pods from the application container image that was built in the build configuration stage. The deployment pod also deploys other resources such as services and routes. The deployment configuration begins after the build configuration succeeds. After the deployment pod has started the application pods , application failures can occur within the running application pods. For instance, an application might not behave as expected even though the application pods are in a Running state. In this scenario, you can access running application pods to investigate application failures within a pod. When troubleshooting S2I issues, follow this strategy: Monitor build, deployment, and application pod status Determine the stage of the S2I process where the problem occurred Review logs corresponding to the failed stage 7.8.2. Gathering Source-to-Image diagnostic data The S2I tool runs a build pod and a deployment pod in sequence. 
The deployment pod is responsible for deploying the application pods based on the application container image created in the build stage. Watch build, deployment and application pod status to determine where in the S2I process a failure occurs. Then, focus diagnostic data collection accordingly. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Watch the pod status throughout the S2I process to determine at which stage a failure occurs: USD oc get pods -w 1 1 Use -w to monitor pods for changes until you quit the command using Ctrl+C . Review a failed pod's logs for errors. If the build pod fails , review the build pod's logs: USD oc logs -f pod/<application_name>-<build_number>-build Note Alternatively, you can review the build configuration's logs using oc logs -f bc/<application_name> . The build configuration's logs include the logs from the build pod. If the deployment pod fails , review the deployment pod's logs: USD oc logs -f pod/<application_name>-<build_number>-deploy Note Alternatively, you can review the deployment configuration's logs using oc logs -f dc/<application_name> . This outputs logs from the deployment pod until the deployment pod completes successfully. The command outputs logs from the application pods if you run it after the deployment pod has completed. After a deployment pod completes, its logs can still be accessed by running oc logs -f pod/<application_name>-<build_number>-deploy . If an application pod fails, or if an application is not behaving as expected within a running application pod , review the application pod's logs: USD oc logs -f pod/<application_name>-<build_number>-<random_string> 7.8.3. Gathering application diagnostic data to investigate application failures Application failures can occur within running application pods. In these situations, you can retrieve diagnostic information with these strategies: Review events relating to the application pods. Review the logs from the application pods, including application-specific log files that are not collected by the OpenShift Logging framework. Test application functionality interactively and run diagnostic tools in an application container. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List events relating to a specific application pod. The following example retrieves events for an application pod named my-app-1-akdlg : USD oc describe pod/my-app-1-akdlg Review logs from an application pod: USD oc logs -f pod/my-app-1-akdlg Query specific logs within a running application pod. Logs that are sent to stdout are collected by the OpenShift Logging framework and are included in the output of the preceding command. The following query is only required for logs that are not sent to stdout. If an application log can be accessed without root privileges within a pod, concatenate the log file as follows: USD oc exec my-app-1-akdlg -- cat /var/log/my-application.log If root access is required to view an application log, you can start a debug container with root privileges and then view the log file from within the container. Start the debug container from the project's DeploymentConfig object. 
Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation: USD oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log Note You can access an interactive shell with root access within the debug pod if you run oc debug dc/<deployment_configuration> --as-root without appending -- <command> . Test application functionality interactively and run diagnostic tools, in an application container with an interactive shell. Start an interactive shell on the application container: USD oc exec -it my-app-1-akdlg /bin/bash Test application functionality interactively from within the shell. For example, you can run the container's entry point command and observe the results. Then, test changes from the command line directly, before updating the source code and rebuilding the application container through the S2I process. Run diagnostic binaries available within the container. Note Root privileges are required to run some diagnostic binaries. In these situations you can start a debug pod with root access, based on a problematic pod's DeploymentConfig object, by running oc debug dc/<deployment_configuration> --as-root . Then, you can run diagnostic binaries as root from within the debug pod. If diagnostic binaries are not available within a container, you can run a host's diagnostic binaries within a container's namespace by using nsenter . The following example runs ip ad within a container's namespace, using the host`s ip binary. Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-cluster-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.17 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. Determine the target container ID: # crictl ps Determine the container's process ID. In this example, the target container ID is a7fe32346b120 : # crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print USD2}' Run ip ad within the container's namespace, using the host's ip binary. This example uses 31150 as the container's process ID. The nsenter command enters the namespace of a target process and runs a command in its namespace. Because the target process in this example is a container's process ID, the ip ad command is run in the container's namespace from the host: # nsenter -n -t 31150 -- ip ad Note Running a host's diagnostic binaries within a container's namespace is only possible if you are using a privileged container such as a debug node. 7.8.4. Additional resources See Source-to-Image (S2I) build for more details about the S2I build strategy. 7.9. Troubleshooting storage issues 7.9.1. 
Resolving multi-attach errors When a node crashes or shuts down abruptly, the attached ReadWriteOnce (RWO) volume is expected to be unmounted from the node so that it can be used by a pod scheduled on another node. However, mounting on a new node is not possible because the failed node is unable to unmount the attached volume. A multi-attach error is reported: Example output Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume "pvc-8837384d-69d7-40b2-b2e6-5df86943eef9" Volume is already used by pod(s) sso-mysql-1-ns6b4 Procedure To resolve the multi-attach issue, use one of the following solutions: Enable multiple attachments by using RWX volumes. For most storage solutions, you can use ReadWriteMany (RWX) volumes to prevent multi-attach errors. Recover or delete the failed node when using an RWO volume. For storage that does not support RWX, such as VMware vSphere, RWO volumes must be used instead. However, RWO volumes cannot be mounted on multiple nodes. If you encounter a multi-attach error message with an RWO volume, force delete the pod on a shutdown or crashed node to avoid data loss in critical workloads, such as when dynamic persistent volumes are attached. USD oc delete pod <old_pod> --force=true --grace-period=0 This command deletes the volumes stuck on shutdown or crashed nodes after six minutes. 7.10. Troubleshooting Windows container workload issues 7.10.1. Windows Machine Config Operator does not install If you have completed the process of installing the Windows Machine Config Operator (WMCO), but the Operator is stuck in the InstallWaiting phase, your issue is likely caused by a networking issue. The WMCO requires your OpenShift Container Platform cluster to be configured with hybrid networking using OVN-Kubernetes; the WMCO cannot complete the installation process without hybrid networking available. This is necessary to manage nodes on multiple operating systems (OS) and OS variants. This must be completed during the installation of your cluster. For more information, see Configuring hybrid networking . 7.10.2. Investigating why Windows Machine does not become compute node There are various reasons why a Windows Machine does not become a compute node. The best way to investigate this problem is to collect the Windows Machine Config Operator (WMCO) logs. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. Procedure Run the following command to collect the WMCO logs: USD oc logs -f deployment/windows-machine-config-operator -n openshift-windows-machine-config-operator 7.10.3. Accessing a Windows node Windows nodes cannot be accessed using the oc debug node command; the command requires running a privileged pod on the node, which is not yet supported for Windows. Instead, a Windows node can be accessed using a secure shell (SSH) or Remote Desktop Protocol (RDP). An SSH bastion is required for both methods. 7.10.3.1. Accessing a Windows node using SSH You can access a Windows node by using a secure shell (SSH). Prerequisites You have installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. You have added the key used in the cloud-private-key secret and the key used when creating the cluster to the ssh-agent. 
For security reasons, remember to remove the keys from the ssh-agent after use. You have connected to the Windows node using an ssh-bastion pod . Procedure Access the Windows node by running the following command: USD ssh -t -o StrictHostKeyChecking=no -o ProxyCommand='ssh -A -o StrictHostKeyChecking=no \ -o ServerAliveInterval=30 -W %h:%p core@USD(oc get service --all-namespaces -l run=ssh-bastion \ -o go-template="{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}")' <username>@<windows_node_internal_ip> 1 2 1 Specify the cloud provider username, such as Administrator for Amazon Web Services (AWS) or capi for Microsoft Azure. 2 Specify the internal IP address of the node, which can be discovered by running the following command: USD oc get nodes <node_name> -o jsonpath={.status.addresses[?\(@.type==\"InternalIP\"\)].address} 7.10.3.2. Accessing a Windows node using RDP You can access a Windows node by using a Remote Desktop Protocol (RDP). Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. You have added the key used in the cloud-private-key secret and the key used when creating the cluster to the ssh-agent. For security reasons, remember to remove the keys from the ssh-agent after use. You have connected to the Windows node using an ssh-bastion pod . Procedure Run the following command to set up an SSH tunnel: USD ssh -L 2020:<windows_node_internal_ip>:3389 \ 1 core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template="{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}") 1 Specify the internal IP address of the node, which can be discovered by running the following command: USD oc get nodes <node_name> -o jsonpath={.status.addresses[?\(@.type==\"InternalIP\"\)].address} From within the resulting shell, SSH into the Windows node and run the following command to create a password for the user: C:\> net user <username> * 1 1 Specify the cloud provider user name, such as Administrator for AWS or capi for Azure. You can now remotely access the Windows node at localhost:2020 using an RDP client. 7.10.4. Collecting Kubernetes node logs for Windows containers Windows container logging works differently from Linux container logging; the Kubernetes node logs for Windows workloads are streamed to the C:\var\logs directory by default. Therefore, you must gather the Windows node logs from that directory. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. Procedure To view the logs under all directories in C:\var\logs , run the following command: USD oc adm node-logs -l kubernetes.io/os=windows --path= \ /ip-10-0-138-252.us-east-2.compute.internal containers \ /ip-10-0-138-252.us-east-2.compute.internal hybrid-overlay \ /ip-10-0-138-252.us-east-2.compute.internal kube-proxy \ /ip-10-0-138-252.us-east-2.compute.internal kubelet \ /ip-10-0-138-252.us-east-2.compute.internal pods You can now list files in the directories using the same command and view the individual log files. For example, to view the kubelet logs, run the following command: USD oc adm node-logs -l kubernetes.io/os=windows --path=/kubelet/kubelet.log 7.10.5. Collecting Windows application event logs The Get-WinEvent shim on the kubelet logs endpoint can be used to collect application event logs from Windows machines. 
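If you need to capture these logs from several Windows nodes at once, a small shell loop around oc adm node-logs can save one file per node. The following is only a sketch: it assumes at least one Windows node is present, and the windows-node-logs output directory name is arbitrary.

mkdir -p windows-node-logs
for node in $(oc get nodes -l kubernetes.io/os=windows -o jsonpath='{.items[*].metadata.name}'); do
  # Kubelet log for this node
  oc adm node-logs "$node" --path=/kubelet/kubelet.log > "windows-node-logs/${node}-kubelet.log"
  # Application event log exposed through the Get-WinEvent shim
  oc adm node-logs "$node" --path=journal > "windows-node-logs/${node}-events.log"
done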
Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. Procedure To view logs from all applications logging to the event logs on the Windows machine, run: USD oc adm node-logs -l kubernetes.io/os=windows --path=journal The same command is executed when collecting logs with oc adm must-gather . Other Windows application logs from the event log can also be collected by specifying the respective service with a -u flag. For example, you can run the following command to collect logs for the docker runtime service: USD oc adm node-logs -l kubernetes.io/os=windows --path=journal -u docker 7.10.6. Collecting Docker logs for Windows containers The Windows Docker service does not stream its logs to stdout, but instead, logs to the event log for Windows. You can view the Docker event logs to investigate issues you think might be caused by the Windows Docker service. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. Procedure SSH into the Windows node and enter PowerShell: C:\> powershell View the Docker logs by running the following command: C:\> Get-EventLog -LogName Application -Source Docker 7.10.7. Additional resources Containers on Windows troubleshooting Troubleshoot host and container image mismatches Docker for Windows troubleshooting Common Kubernetes problems with Windows 7.11. Investigating monitoring issues OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. In OpenShift Container Platform 4.17, cluster administrators can optionally enable monitoring for user-defined projects. Use these procedures if the following issues occur: Your own metrics are unavailable. Prometheus is consuming a lot of disk space. The KubePersistentVolumeFillingUp alert is firing for Prometheus. 7.11.1. Investigating why user-defined project metrics are unavailable ServiceMonitor resources enable you to determine how to use the metrics exposed by a service in user-defined projects. Follow the steps outlined in this procedure if you have created a ServiceMonitor resource but cannot see any corresponding metrics in the Metrics UI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have enabled and configured monitoring for user-defined projects. You have created a ServiceMonitor resource. Procedure Check that the corresponding labels match in the service and ServiceMonitor resource configurations. Obtain the label defined in the service. The following example queries the prometheus-example-app service in the ns1 project: USD oc -n ns1 get service prometheus-example-app -o yaml Example output labels: app: prometheus-example-app Check that the matchLabels definition in the ServiceMonitor resource configuration matches the label output in the preceding step. 
The following example queries the prometheus-example-monitor service monitor in the ns1 project: USD oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml Example output apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app Note You can check service and ServiceMonitor resource labels as a developer with view permissions for the project. Inspect the logs for the Prometheus Operator in the openshift-user-workload-monitoring project. List the pods in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get pods Example output NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m Obtain the logs from the prometheus-operator container in the prometheus-operator pod. In the following example, the pod is called prometheus-operator-776fcbbd56-2nbfm : USD oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator If there is an issue with the service monitor, the logs might include an error similar to this example: level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg="skipping servicemonitor" error="it accesses file system via bearer token file which Prometheus specification prohibits" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload Review the target status for your endpoint on the Metrics targets page in the OpenShift Container Platform web console UI. Log in to the OpenShift Container Platform web console and navigate to Observe Targets in the Administrator perspective. Locate the metrics endpoint in the list, and review the status of the target in the Status column. If the Status is Down , click the URL for the endpoint to view more information on the Target Details page for that metrics target. Configure debug level logging for the Prometheus Operator in the openshift-user-workload-monitoring project. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: debug for prometheusOperator under data/config.yaml to set the log level to debug : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug # ... Save the file to apply the changes. The affected prometheus-operator pod is automatically redeployed. Confirm that the debug log-level has been applied to the prometheus-operator deployment in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Debug level logging will show all calls made by the Prometheus Operator. Check that the prometheus-operator pod is running: USD oc -n openshift-user-workload-monitoring get pods Note If an unrecognized Prometheus Operator logLevel value is included in the config map, the prometheus-operator pod might not restart successfully.
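With debug logging enabled, one quick way to confirm that the Prometheus Operator has processed a particular ServiceMonitor resource is to filter its logs for the resource name. The following sketch assumes the prometheus-example-monitor resource in the ns1 project; the exact message format can vary between Prometheus Operator versions:

oc -n openshift-user-workload-monitoring logs deploy/prometheus-operator \
  -c prometheus-operator | grep -i servicemonitor | grep ns1/prometheus-example-monitor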
Review the debug logs to see if the Prometheus Operator is using the ServiceMonitor resource. Review the logs for other related errors. Additional resources Enabling monitoring for user-defined projects See Specifying how a service is monitored for details on how to create a service monitor or pod monitor See Getting detailed information about a metrics target 7.11.2. Determining why Prometheus is consuming a lot of disk space Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values. Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space. You can use the following measures when Prometheus consumes a lot of disk: Check the time series database (TSDB) status using the Prometheus HTTP API for more information about which labels are creating the most time series data. Doing so requires cluster administrator privileges. Check the number of scrape samples that are being collected. Reduce the number of unique time series that are created by reducing the number of unbound attributes that are assigned to user-defined metrics. Note Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective, navigate to Observe Metrics . Enter a Prometheus Query Language (PromQL) query in the Expression field. The following example queries help to identify high cardinality metrics that might result in high disk space consumption: By running the following query, you can identify the ten jobs that have the highest number of scrape samples: topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling))) By running the following query, you can pinpoint time series churn by identifying the ten jobs that have created the most time series data in the last hour: topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h]))) Investigate the number of unbound label values assigned to metrics with higher than expected scrape sample counts: If the metrics relate to a user-defined project , review the metrics key-value pairs assigned to your workload. These are implemented through Prometheus client libraries at the application level. Try to limit the number of unbound attributes referenced in your labels. If the metrics relate to a core OpenShift Container Platform project , create a Red Hat support case on the Red Hat Customer Portal . 
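Before changing application code, you can also estimate how much a single suspected unbound label contributes by counting its distinct values in the Expression field. The metric and label names in this query are hypothetical placeholders; substitute your own:

count(count by(customer_id) (http_requests_total))

The inner count by(customer_id) produces one series per label value and the outer count returns the number of distinct values, so a large result points to a label that is worth bounding or removing.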
Review the TSDB status using the Prometheus HTTP API by following these steps when logged in as a cluster administrator: Get the Prometheus API route URL by running the following command: USD HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}') Extract an authentication token by running the following command: USD TOKEN=USD(oc whoami -t) Query the TSDB status for Prometheus by running the following command: USD curl -H "Authorization: Bearer USDTOKEN" -k "https://USDHOST/api/v1/status/tsdb" Example output "status": "success","data":{"headStats":{"numSeries":507473, "numLabelPairs":19832,"chunkCount":946298,"minTime":1712253600010, "maxTime":1712257935346},"seriesCountByMetricName": [{"name":"etcd_request_duration_seconds_bucket","value":51840}, {"name":"apiserver_request_sli_duration_seconds_bucket","value":47718}, ... Additional resources Setting a scrape sample limit for user-defined projects 7.11.3. Resolving the KubePersistentVolumeFillingUp alert firing for Prometheus As a cluster administrator, you can resolve the KubePersistentVolumeFillingUp alert being triggered for Prometheus. The critical alert fires when a persistent volume (PV) claimed by a prometheus-k8s-* pod in the openshift-monitoring project has less than 3% total space remaining. This can cause Prometheus to function abnormally. Note There are two KubePersistentVolumeFillingUp alerts: Critical alert : The alert with the severity="critical" label is triggered when the mounted PV has less than 3% total space remaining. Warning alert : The alert with the severity="warning" label is triggered when the mounted PV has less than 15% total space remaining and is expected to fill up within four days. To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure List the size of all TSDB blocks, sorted from oldest to newest, by running the following command: USD oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \ 1 -c prometheus --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \ 2 -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') \ -- sh -c 'cd /prometheus/;du -hs USD(ls -dtr */ | grep -Eo "[0-9|A-Z]{26}")' 1 2 Replace <prometheus_k8s_pod_name> with the pod mentioned in the KubePersistentVolumeFillingUp alert description. Example output 308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B Identify which and how many blocks could be removed, then remove the blocks. 
The following example command removes the three oldest Prometheus TSDB blocks from the prometheus-k8s-0 pod: USD oc debug prometheus-k8s-0 -n openshift-monitoring \ -c prometheus --image=USD(oc get po -n openshift-monitoring prometheus-k8s-0 \ -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') \ -- sh -c 'ls -latr /prometheus/ | egrep -o "[0-9|A-Z]{26}" | head -3 | \ while read BLOCK; do rm -r /prometheus/USDBLOCK; done' Verify the usage of the mounted PV and ensure there is enough space available by running the following command: USD oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \ 1 --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \ 2 -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') -- df -h /prometheus/ 1 2 Replace <prometheus_k8s_pod_name> with the pod mentioned in the KubePersistentVolumeFillingUp alert description. The following example output shows the mounted PV claimed by the prometheus-k8s-0 pod that has 63% of space remaining: Example output Starting pod/prometheus-k8s-0-debug-j82w4 ... Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod ... 7.12. Diagnosing OpenShift CLI ( oc ) issues 7.12.1. Understanding OpenShift CLI ( oc ) log levels With the OpenShift CLI ( oc ), you can create applications and manage OpenShift Container Platform projects from a terminal. If oc command-specific issues arise, increase the oc log level to output API request, API response, and curl request details generated by the command. This provides a granular view of a particular oc command's underlying operation, which in turn might provide insight into the nature of a failure. oc log levels range from 1 to 10. The following table provides a list of oc log levels, along with their descriptions. Table 7.4. OpenShift CLI (oc) log levels Log level Description 1 to 5 No additional logging to stderr. 6 Log API requests to stderr. 7 Log API requests and headers to stderr. 8 Log API requests, headers, and body, plus API response headers and body to stderr. 9 Log API requests, headers, and body, API response headers and body, plus curl requests to stderr. 10 Log API requests, headers, and body, API response headers and body, plus curl requests to stderr, in verbose detail. 7.12.2. Specifying OpenShift CLI ( oc ) log levels You can investigate OpenShift CLI ( oc ) issues by increasing the command's log level. The OpenShift Container Platform user's current session token is typically included in logged curl requests where required. You can also obtain the current user's session token manually, for use when testing aspects of an oc command's underlying process step-by-step. Prerequisites Install the OpenShift CLI ( oc ). Procedure Specify the oc log level when running an oc command: USD oc <command> --loglevel <log_level> where: <command> Specifies the command you are running. <log_level> Specifies the log level to apply to the command. To obtain the current user's session token, run the following command: USD oc whoami -t Example output sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6...
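If you need to isolate whether a failure happens in the oc client or on the API server, you can replay the request that the increased log level reveals by using the session token directly with curl. This is a minimal sketch; the namespace and resource path are examples only, and -k skips TLS verification for brevity:

TOKEN=$(oc whoami -t)
API=$(oc whoami --show-server)
# Repeat the request that oc issues at log level 8 or 9, without the oc client
curl -k -H "Authorization: Bearer $TOKEN" "$API/api/v1/namespaces/default/pods"

If the curl request succeeds while the equivalent oc command fails, the problem is more likely in the client environment than on the API server.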
[ "ssh <user_name>@<load_balancer> systemctl status haproxy", "ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623'", "ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623'", "dig <wildcard_fqdn> @<dns_server>", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug 1", "./openshift-install create ignition-configs --dir=./install_dir", "tail -f ~/<installation_directory>/.openshift_install.log", "ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service", "oc adm node-logs --role=master -u kubelet", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service", "oc adm node-logs --role=master -u crio", "ssh [email protected]_name.sub_domain.domain journalctl -b -f -u crio.service", "curl -I http://<http_server_fqdn>:<port>/bootstrap.ign 1", "grep -is 'bootstrap.ign' /var/log/httpd/access_log", "ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service", "ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'", "curl -I http://<http_server_fqdn>:<port>/master.ign 1", "grep -is 'master.ign' /var/log/httpd/access_log", "oc get nodes", "oc describe node <master_node>", "oc get daemonsets -n openshift-ovn-kubernetes", "oc get pods -n openshift-ovn-kubernetes", "oc logs <ovn-k_pod> -n openshift-ovn-kubernetes", "oc get network.config.openshift.io cluster -o yaml", "./openshift-install create manifests", "oc get pods -n openshift-network-operator", "oc logs pod/<network_operator_pod_name> -n openshift-network-operator", "oc adm node-logs --role=master -u kubelet", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service", "oc adm node-logs --role=master -u crio", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service", "oc adm node-logs --role=master --path=openshift-apiserver", "oc adm node-logs --role=master --path=openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "curl https://api-int.<cluster_name>:22623/config/master", "dig api-int.<cluster_name> @<dns_server>", "dig -x <load_balancer_mco_ip_address> @<dns_server>", "ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master", "ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking", "openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text", "oc get pods -n openshift-etcd", "oc get pods -n openshift-etcd-operator", "oc describe pod/<pod_name> -n <namespace>", "oc logs pod/<pod_name> -n <namespace>", "oc logs pod/<pod_name> -c <container_name> -n <namespace>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd-", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>'", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "oc adm node-logs --role=master -u kubelet", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service", "oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has 
expired'", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service | grep -is 'x509: certificate has expired'", "curl -I http://<http_server_fqdn>:<port>/worker.ign 1", "grep -is 'worker.ign' /var/log/httpd/access_log", "oc get nodes", "oc describe node <worker_node>", "oc get pods -n openshift-machine-api", "oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api", "oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator", "oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy", "oc adm node-logs --role=worker -u kubelet", "ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service", "oc adm node-logs --role=worker -u crio", "ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service", "oc adm node-logs --role=worker --path=sssd", "oc adm node-logs --role=worker --path=sssd/sssd.log", "ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log", "ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a", "ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "curl https://api-int.<cluster_name>:22623/config/worker", "dig api-int.<cluster_name> @<dns_server>", "dig -x <load_balancer_mco_ip_address> @<dns_server>", "ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker", "ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking", "openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text", "oc get clusteroperators", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 1 csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending 2 csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc describe clusteroperator <operator_name>", "oc get pods -n <operator_namespace>", "oc describe pod/<operator_pod_name> -n <operator_namespace>", "oc logs pod/<operator_pod_name> -n <operator_namespace>", "oc get pod -o \"jsonpath={range .status.containerStatuses[*]}{.name}{'\\t'}{.state}{'\\t'}{.image}{'\\n'}{end}\" <operator_pod_name> -n <operator_namespace>", "oc adm release info <image_path>:<tag> --commits", "./openshift-install gather bootstrap --dir <installation_directory> 1", "./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address> 5", "INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"", "oc get nodes", "oc adm top nodes", "oc adm top node my-node", "oc debug node/my-node", "chroot /host", "systemctl is-active kubelet", "systemctl status kubelet", "oc adm node-logs --role=master -u kubelet 1", "oc adm node-logs --role=master --path=openshift-apiserver", "oc adm node-logs --role=master --path=openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log", "oc debug 
node/my-node", "chroot /host", "systemctl is-active crio", "systemctl status crio.service", "oc adm node-logs --role=master -u crio", "oc adm node-logs <node_name> -u crio", "ssh core@<node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service", "Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory", "can't stat lower layer ... because it does not exist. Going through storage to recreate the missing symlinks.", "oc adm cordon <node_name>", "oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data", "ssh [email protected] sudo -i", "systemctl stop kubelet", ".. for pod in USD(crictl pods -q); do if [[ \"USD(crictl inspectp USDpod | jq -r .status.linux.namespaces.options.network)\" != \"NODE\" ]]; then crictl rmp -f USDpod; fi; done", "crictl rmp -fa", "systemctl stop crio", "crio wipe -f", "systemctl start crio systemctl start kubelet", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.30.3", "oc adm uncordon <node_name>", "NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.30.3", "rpm-ostree kargs --append='crashkernel=256M'", "systemctl enable kdump.service", "systemctl reboot", "variant: openshift version: 4.17.0 metadata: name: 99-worker-kdump 1 labels: machineconfiguration.openshift.io/role: worker 2 openshift: kernel_arguments: 3 - crashkernel=256M storage: files: - path: /etc/kdump.conf 4 mode: 0644 overwrite: true contents: inline: | path /var/crash core_collector makedumpfile -l --message-level 7 -d 31 - path: /etc/sysconfig/kdump 5 mode: 0644 overwrite: true contents: inline: | KDUMP_COMMANDLINE_REMOVE=\"hugepages hugepagesz slub_debug quiet log_buf_len swiotlb\" KDUMP_COMMANDLINE_APPEND=\"irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable\" 6 KEXEC_ARGS=\"-s\" KDUMP_IMG=\"vmlinuz\" systemd: units: - name: kdump.service enabled: true", "nfs server.example.com:/export/cores core_collector makedumpfile -l --message-level 7 -d 31 extra_bins /sbin/mount.nfs extra_modules nfs nfsv3 nfs_layout_nfsv41_files blocklayoutdriver nfs_layout_flexfiles nfs_layout_nfsv41_files", "butane 99-worker-kdump.bu -o 99-worker-kdump.yaml", "oc create -f 99-worker-kdump.yaml", "systemctl --failed", "journalctl -u <unit>.service", "NODEIP_HINT=192.0.2.1", "echo -n 'NODEIP_HINT=192.0.2.1' | base64 -w0", "Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx==", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-nodeip-hint-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-nodeip-hint-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration", "[connection] id=eno1 type=ethernet interface-name=eno1 master=bond1 slave-type=bond 
autoconnect=true autoconnect-priority=20", "[connection] id=eno2 type=ethernet interface-name=eno2 master=bond1 slave-type=bond autoconnect=true autoconnect-priority=20", "[connection] id=bond1 type=bond interface-name=bond1 autoconnect=true connection.autoconnect-slaves=1 autoconnect-priority=20 [bond] mode=802.3ad miimon=100 xmit_hash_policy=\"layer3+4\" [ipv4] method=auto", "base64 <directory_path>/en01.config", "base64 <directory_path>/eno2.config", "base64 <directory_path>/bond1.config", "export ROLE=<machine_role>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-USD{ROLE}-sec-bridge-cni spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:;base64,<base-64-encoded-contents-for-bond1.conf> path: /etc/NetworkManager/system-connections/bond1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno1.conf> path: /etc/NetworkManager/system-connections/eno1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno2.conf> path: /etc/NetworkManager/system-connections/eno2.nmconnection filesystem: root mode: 0600", "oc create -f <machine_config_file_name>", "bond1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-extra-bridge spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/ovnk/extra_bridge mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond1 filesystem: root", "oc create -f <machine_config_file_name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-br-ex-override spec: config: ignition: version: 3.2.0 storage: files: - path: /var/lib/ovnk/iface_default_hint mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond0 1 filesystem: root", "oc create -f <machine_config_file_name>", "oc get nodes -o json | grep --color exgw-ip-addresses", "\"k8s.ovn.org/l3-gateway-config\": \\\"exgw-ip-address\\\":\\\"172.xx.xx.yy/24\\\",\\\"next-hops\\\":[\\\"xx.xx.xx.xx\\\"],", "oc debug node/<node_name> -- chroot /host sh -c \"ip a | grep mtu | grep br-ex\"", "Starting pod/worker-1-debug To use host binaries, run `chroot /host` 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 6: br-ex1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000", "oc debug node/<node_name> -- chroot /host sh -c \"ip a | grep -A1 -E 'br-ex|bond0'", "Starting pod/worker-1-debug To use host binaries, run `chroot /host` sh-5.1# ip a | grep -A1 -E 'br-ex|bond0' 2: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff -- 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff inet 10.xx.xx.xx/21 brd 10.xx.xx.255 scope global dynamic noprefixroute br-ex", "E0514 12:47:17.998892 2281 daemon.go:1350] content mismatch for file /etc/systemd/system/ovs-vswitchd.service: [Unit]", "oc debug node/<node_name>", "chroot /host", "ovs-appctl vlog/list", "console syslog file ------- ------ ------ backtrace OFF INFO INFO bfd OFF INFO INFO bond 
OFF INFO INFO bridge OFF INFO INFO bundle OFF INFO INFO bundles OFF INFO INFO cfm OFF INFO INFO collectors OFF INFO INFO command_line OFF INFO INFO connmgr OFF INFO INFO conntrack OFF INFO INFO conntrack_tp OFF INFO INFO coverage OFF INFO INFO ct_dpif OFF INFO INFO daemon OFF INFO INFO daemon_unix OFF INFO INFO dns_resolve OFF INFO INFO dpdk OFF INFO INFO", "Restart=always ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /var/lib/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /etc/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /run/openvswitch' ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg", "systemctl daemon-reload", "systemctl restart ovs-vswitchd", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master 1 name: 99-change-ovs-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - dropins: - contents: | [Service] ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg 2 ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg name: 20-ovs-vswitchd-restart.conf name: ovs-vswitchd.service", "oc apply -f 99-change-ovs-loglevel.yaml", "oc adm node-logs <node_name> -u ovs-vswitchd", "journalctl -b -f -u ovs-vswitchd.service", "oc get subs -n <operator_namespace>", "oc describe sub <subscription_name> -n <operator_namespace>", "Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy", "oc get catalogsources -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m", "oc describe catalogsource example-catalog -n openshift-marketplace", "Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m", "oc describe pod example-catalog-bwt8z -n openshift-marketplace", "Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully 
assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull", "oc get clusteroperators", "oc get pod -n <operator_namespace>", "oc describe pod <operator_pod_name> -n <operator_namespace>", "oc debug node/my-node", "chroot /host", "crictl ps", "crictl ps --name network-operator", "oc get pods -n <operator_namespace>", "oc logs pod/<pod_name> -n <operator_namespace>", "oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: true 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: false 1", "oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/master", "oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/worker", "oc get machineconfigpool/master --template='{{.spec.paused}}'", "oc get machineconfigpool/worker --template='{{.spec.paused}}'", "true", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False", "oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/master", "oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/worker", "oc get machineconfigpool/master --template='{{.spec.paused}}'", "oc get machineconfigpool/worker --template='{{.spec.paused}}'", "false", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"", "rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host", "oc get sub,csv -n <namespace>", "NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded", "oc delete subscription <subscription_name> -n <namespace>", "oc delete csv 
<csv_name> -n <namespace>", "oc get job,configmap -n openshift-marketplace", "NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s", "oc delete job <job_name> -n openshift-marketplace", "oc delete configmap <configmap_name> -n openshift-marketplace", "oc get sub,csv,installplan -n <namespace>", "message: 'Failed to delete all resource types, 1 remaining: Internal error occurred: error resolving resource'", "oc get namespaces", "operator-ns-1 Terminating", "oc get crds", "oc delete crd <crd_name>", "oc get EtcdCluster -n <namespace_name>", "oc get EtcdCluster --all-namespaces", "oc delete <cr_name> <cr_instance_name> -n <namespace_name>", "oc get namespace <namespace_name>", "oc get sub,csv,installplan -n <namespace>", "oc project <project_name>", "oc get pods", "oc status", "skopeo inspect docker://<image_reference>", "oc edit deployment/my-deployment", "oc get pods -w", "oc get events", "oc logs <pod_name>", "oc logs <pod_name> -c <container_name>", "oc exec <pod_name> -- ls -alh /var/log", "total 124K drwxr-xr-x. 1 root root 33 Aug 11 11:23 . drwxr-xr-x. 1 root root 28 Sep 6 2022 .. -rw-rw----. 1 root utmp 0 Jul 10 10:31 btmp -rw-r--r--. 1 root root 33K Jul 17 10:07 dnf.librepo.log -rw-r--r--. 1 root root 69K Jul 17 10:07 dnf.log -rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log -rw-r--r--. 1 root root 480 Jul 17 10:07 hawkey.log -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 lastlog drwx------. 2 root root 23 Aug 11 11:14 openshift-apiserver drwx------. 2 root root 6 Jul 10 10:31 private drwxr-xr-x. 1 root root 22 Mar 9 08:05 rhsm -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 wtmp", "oc exec <pod_name> cat /var/log/<path_to_log>", "2023-07-10T10:29:38+0000 INFO --- logging initialized --- 2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms 2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile 2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories. 2023-07-10T10:29:38+0000 INFO Unable to read consumer identity 2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode. 
2023-07-10T10:29:38+0000 INFO", "oc exec <pod_name> -c <container_name> ls /var/log", "oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log>", "oc project <namespace>", "oc rsh <pod_name> 1", "oc rsh -c <container_name> pod/<pod_name>", "oc port-forward <pod_name> <host_port>:<pod_port> 1", "oc get deployment -n <project_name>", "oc debug deployment/my-deployment --as-root -n <project_name>", "oc get deploymentconfigs -n <project_name>", "oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>", "oc cp <local_path> <pod_name>:/<path> -c <container_name> 1", "oc cp <pod_name>:/<path> -c <container_name> <local_path> 1", "oc get pods -w 1", "oc logs -f pod/<application_name>-<build_number>-build", "oc logs -f pod/<application_name>-<build_number>-deploy", "oc logs -f pod/<application_name>-<build_number>-<random_string>", "oc describe pod/my-app-1-akdlg", "oc logs -f pod/my-app-1-akdlg", "oc exec my-app-1-akdlg -- cat /var/log/my-application.log", "oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log", "oc exec -it my-app-1-akdlg /bin/bash", "oc debug node/my-cluster-node", "chroot /host", "crictl ps", "crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print USD2}'", "nsenter -n -t 31150 -- ip ad", "Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume \"pvc-8837384d-69d7-40b2-b2e6-5df86943eef9\" Volume is already used by pod(s) sso-mysql-1-ns6b4", "oc delete pod <old_pod> --force=true --grace-period=0", "oc logs -f deployment/windows-machine-config-operator -n openshift-windows-machine-config-operator", "ssh -t -o StrictHostKeyChecking=no -o ProxyCommand='ssh -A -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -W %h:%p core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")' <username>@<windows_node_internal_ip> 1 2", "oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}", "ssh -L 2020:<windows_node_internal_ip>:3389 \\ 1 core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")", "oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}", "C:\\> net user <username> * 1", "oc adm node-logs -l kubernetes.io/os=windows --path= /ip-10-0-138-252.us-east-2.compute.internal containers /ip-10-0-138-252.us-east-2.compute.internal hybrid-overlay /ip-10-0-138-252.us-east-2.compute.internal kube-proxy /ip-10-0-138-252.us-east-2.compute.internal kubelet /ip-10-0-138-252.us-east-2.compute.internal pods", "oc adm node-logs -l kubernetes.io/os=windows --path=/kubelet/kubelet.log", "oc adm node-logs -l kubernetes.io/os=windows --path=journal", "oc adm node-logs -l kubernetes.io/os=windows --path=journal -u docker", "C:\\> powershell", "C:\\> Get-EventLog -LogName Application -Source Docker", "oc -n ns1 get service prometheus-example-app -o yaml", "labels: app: prometheus-example-app", "oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml", "apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app", "oc -n 
openshift-user-workload-monitoring get pods", "NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m", "oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator", "level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug", "oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-user-workload-monitoring get pods", "topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))", "topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))", "HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}')", "TOKEN=USD(oc whoami -t)", "curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/status/tsdb\"", "\"status\": \"success\",\"data\":{\"headStats\":{\"numSeries\":507473, \"numLabelPairs\":19832,\"chunkCount\":946298,\"minTime\":1712253600010, \"maxTime\":1712257935346},\"seriesCountByMetricName\": [{\"name\":\"etcd_request_duration_seconds_bucket\",\"value\":51840}, {\"name\":\"apiserver_request_sli_duration_seconds_bucket\",\"value\":47718},", "oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 -c prometheus --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'cd /prometheus/;du -hs USD(ls -dtr */ | grep -Eo \"[0-9|A-Z]{26}\")'", "308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B", "oc debug prometheus-k8s-0 -n openshift-monitoring -c prometheus --image=USD(oc get po -n openshift-monitoring prometheus-k8s-0 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'ls -latr /prometheus/ | egrep -o \"[0-9|A-Z]{26}\" | head -3 | while read BLOCK; do rm -r /prometheus/USDBLOCK; done'", "oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- df -h /prometheus/", "Starting pod/prometheus-k8s-0-debug-j82w4 Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod", "oc <command> --loglevel <log_level>", "oc whoami -t", "sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/support/troubleshooting
Chapter 6. Installer-provisioned postinstallation configuration
Chapter 6. Installer-provisioned postinstallation configuration After successfully deploying an installer-provisioned cluster, consider the following postinstallation procedures. 6.1. Optional: Configuring NTP for disconnected clusters OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. Use the following procedure to configure NTP servers on the control plane nodes and configure worker nodes as NTP clients of the control plane nodes after a successful deployment. OpenShift Container Platform nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server. Procedure Install Butane on your installation host by using the following command: USD sudo dnf -y install butane Create a Butane config, 99-master-chrony-conf-override.bu , including the contents of the chrony.conf file for the control plane nodes. Note See "Creating machine configs with Butane" for information about Butane. Butane config example variant: openshift version: 4.14.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml , containing the configuration to be delivered to the control plane nodes: USD butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml Create a Butane config, 99-worker-chrony-conf-override.bu , including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes. Butane config example variant: openshift version: 4.14.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. 
server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml , containing the configuration to be delivered to the worker nodes: USD butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml Apply the 99-master-chrony-conf-override.yaml policy to the control plane nodes. USD oc apply -f 99-master-chrony-conf-override.yaml Example output machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created Apply the 99-worker-chrony-conf-override.yaml policy to the worker nodes. USD oc apply -f 99-worker-chrony-conf-override.yaml Example output machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created Check the status of the applied NTP settings. USD oc describe machineconfigpool 6.2. Enabling a provisioning network after installation The assisted installer and installer-provisioned installation for bare metal clusters provide the ability to deploy a cluster without a provisioning network. This capability is for scenarios such as proof-of-concept clusters or deploying exclusively with Redfish virtual media when each node's baseboard management controller is routable via the baremetal network. You can enable a provisioning network after installation using the Cluster Baremetal Operator (CBO). Prerequisites A dedicated physical network must exist, connected to all worker and control plane nodes. You must isolate the native, untagged physical network. The network cannot have a DHCP server when the provisioningNetwork configuration setting is set to Managed . You can omit the provisioningInterface setting in OpenShift Container Platform 4.10 to use the bootMACAddress configuration setting. Procedure When setting the provisioningInterface setting, first identify the provisioning interface name for the cluster nodes. For example, eth0 or eno1 . Enable the Preboot eXecution Environment (PXE) on the provisioning network interface of the cluster nodes. Retrieve the current state of the provisioning network and save it to a provisioning custom resource (CR) file: USD oc get provisioning -o yaml > enable-provisioning-nw.yaml Modify the provisioning CR file: USD vim ~/enable-provisioning-nw.yaml Scroll down to the provisioningNetwork configuration setting and change it from Disabled to Managed . Then, add the provisioningIP , provisioningNetworkCIDR , provisioningDHCPRange , provisioningInterface , and watchAllNameSpaces configuration settings after the provisioningNetwork setting. Provide appropriate values for each setting. apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: 1 provisioningIP: 2 provisioningNetworkCIDR: 3 provisioningDHCPRange: 4 provisioningInterface: 5 watchAllNameSpaces: 6 1 The provisioningNetwork is one of Managed , Unmanaged , or Disabled . When set to Managed , Metal3 manages the provisioning network and the CBO deploys the Metal3 pod with a configured DHCP server. 
When set to Unmanaged , the system administrator configures the DHCP server manually. 2 The provisioningIP is the static IP address that the DHCP server and ironic use to provision the network. This static IP address must be within the provisioning subnet, and outside of the DHCP range. If you configure this setting, it must have a valid IP address even if the provisioning network is Disabled . The static IP address is bound to the metal3 pod. If the metal3 pod fails and moves to another server, the static IP address also moves to the new server. 3 The Classless Inter-Domain Routing (CIDR) address. If you configure this setting, it must have a valid CIDR address even if the provisioning network is Disabled . For example: 192.168.0.1/24 . 4 The DHCP range. This setting is only applicable to a Managed provisioning network. Omit this configuration setting if the provisioning network is Disabled . For example: 192.168.0.64, 192.168.0.253 . 5 The NIC name for the provisioning interface on cluster nodes. The provisioningInterface setting is only applicable to Managed and Unmanaged provisioning networks. Omit the provisioningInterface configuration setting if the provisioning network is Disabled . Omit the provisioningInterface configuration setting to use the bootMACAddress configuration setting instead. 6 Set this setting to true if you want metal3 to watch namespaces other than the default openshift-machine-api namespace. The default value is false . Save the changes to the provisioning CR file. Apply the provisioning CR file to the cluster: USD oc apply -f enable-provisioning-nw.yaml 6.3. Services for an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 6.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 6.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 6.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for external load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. 
Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the Ingress Controller's load balancer and the API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP server every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 6.3.1. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, which runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80, and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80, and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.
The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... 
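After you add or edit these HAProxy sections, it is worth validating the configuration and reloading the service before running the verification steps below. The file path and service name are assumptions for a typical HAProxy host and are not part of the original procedure; adjust them for your environment.
$ haproxy -c -f /etc/haproxy/haproxy.cfg    # prints a message such as "Configuration file is valid" on success
$ sudo systemctl reload haproxy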
Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. 
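Optionally, before running the curl checks that follow, you can confirm that the new records resolve to the load balancer front end. These dig commands are illustrative and are not part of the original procedure; substitute your cluster name, base domain, and expected load balancer IP address.
$ dig +short api.<cluster_name>.<base_domain>
<load_balancer_ip_address>
$ dig +short apps.<cluster_name>.<base_domain>
<load_balancer_ip_address>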
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private
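As an additional check that is not part of the documented procedure, you can confirm from inside the cluster that the Operators which depend on the API and Ingress endpoints remain healthy after the switch to the external load balancer; each listed Operator should report Available=True and Degraded=False.
$ oc get clusteroperators authentication console ingress kube-apiserver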
[ "sudo dnf -y install butane", "variant: openshift version: 4.14.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan", "butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml", "variant: openshift version: 4.14.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony", "butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml", "oc apply -f 99-master-chrony-conf-override.yaml", "machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created", "oc apply -f 99-worker-chrony-conf-override.yaml", "machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created", "oc describe machineconfigpool", "oc get provisioning -o yaml > enable-provisioning-nw.yaml", "vim ~/enable-provisioning-nw.yaml", "apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: 1 provisioningIP: 2 provisioningNetworkCIDR: 3 provisioningDHCPRange: 4 provisioningInterface: 5 watchAllNameSpaces: 6", "oc apply -f enable-provisioning-nw.yaml", "Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10", "# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz 
http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache", "curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "curl https://api.<cluster_name>.<base_domain>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin 
set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-post-installation-configuration
Dashboard Guide
Dashboard Guide Red Hat Ceph Storage 8 Monitoring Ceph Cluster with Ceph Dashboard Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/dashboard_guide/index
Chapter 23. KafkaAuthorizationKeycloak schema reference
Chapter 23. KafkaAuthorizationKeycloak schema reference Used in: KafkaClusterSpec The type property is a discriminator that distinguishes use of the KafkaAuthorizationKeycloak type from KafkaAuthorizationSimple , KafkaAuthorizationOpa , KafkaAuthorizationCustom . It must have the value keycloak for the type KafkaAuthorizationKeycloak . The following properties are available (property name, property type, description):
type (string): Must be keycloak .
clientId (string): OAuth Client ID which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI.
tokenEndpointUri (string): Authorization server token endpoint URI.
tlsTrustedCertificates (CertSecretSource array): Trusted certificates for TLS connection to the OAuth server.
disableTlsHostnameVerification (boolean): Enable or disable TLS hostname verification. Default value is false .
delegateToKafkaAcls (boolean): Whether the authorization decision should be delegated to the 'Simple' authorizer if DENIED by Red Hat Single Sign-On Authorization Services policies. Default value is false .
grantsRefreshPeriodSeconds (integer): The time between two consecutive grants refresh runs in seconds. The default value is 60.
grantsRefreshPoolSize (integer): The number of threads to use to refresh grants for active sessions. The more threads, the more parallelism, so the sooner the job completes. However, using more threads places a heavier load on the authorization server. The default value is 5.
grantsGcPeriodSeconds (integer): The time, in seconds, between consecutive runs of a job that cleans stale grants from the cache. The default value is 300.
grantsAlwaysLatest (boolean): Controls whether the latest grants are fetched for a new session. When enabled, grants are retrieved from Red Hat Single Sign-On and cached for the user. The default value is false .
superUsers (string array): List of super users. Should contain a list of user principals which should get unlimited access rights.
connectTimeoutSeconds (integer): The connect timeout in seconds when connecting to the authorization server. If not set, the effective connect timeout is 60 seconds.
readTimeoutSeconds (integer): The read timeout in seconds when connecting to the authorization server. If not set, the effective read timeout is 60 seconds.
httpRetries (integer): The maximum number of retries to attempt if an initial HTTP request fails. If not set, the default is to not attempt any retries.
enableMetrics (boolean): Enable or disable OAuth metrics. The default value is false .
includeAcceptHeader (boolean): Whether the Accept header should be set in requests to the authorization servers. The default value is true .
grantsMaxIdleTimeSeconds (integer): The time, in seconds, after which an idle grant can be evicted from the cache. The default value is 300.
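As an illustrative sketch only (not part of this schema reference), the fragment below shows how a few of these properties might appear inside a Kafka custom resource; the cluster name, client ID, token endpoint, secret name, and user principal are placeholders.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ... listeners, storage, and other cluster configuration ...
    authorization:
      type: keycloak
      clientId: kafka
      tokenEndpointUri: https://sso.example.com/realms/kafka-authz/protocol/openid-connect/token
      tlsTrustedCertificates:
        - secretName: oauth-server-cert
          certificate: sso.crt
      delegateToKafkaAcls: false
      grantsRefreshPeriodSeconds: 60
      superUsers:
        - User:CN=my-kafka-admin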
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkaauthorizationkeycloak-reference
Chapter 2. Differences from upstream OpenJDK 17
Chapter 2. Differences from upstream OpenJDK 17 Red Hat build of OpenJDK in Red Hat Enterprise Linux contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow Red Hat Enterprise Linux updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 17 changes: FIPS support. Red Hat build of OpenJDK 17 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 17 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 17 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all of the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources See, Improve system FIPS detection (RHEL Planning Jira) See, Using system-wide cryptographic policies (RHEL documentation)
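As a practical aside that is not part of the release notes, the RHEL state that these integrations detect can be inspected with standard system tools; both commands below are general RHEL utilities and do not apply to the Microsoft Windows builds.
$ fips-mode-setup --check    # reports whether FIPS mode is enabled on the host
$ update-crypto-policies --show    # prints the active system-wide cryptographic policy, for example DEFAULT or FIPS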
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.9/rn-openjdk-diff-from-upstream
E.8. VDB Editor
E.8. VDB Editor E.8.1. VDB Editor A VDB, or virtual database is a container for components used to integrate data from multiple data sources, so that they can be accessed in a federated manner through a single, uniform API. A VDB contains models, which define the structural characteristics of data sources, views, and Web services. The VDB Editor, provides the means to manage the contents of the VDB as well as its deployable (validation) state. The following image shows the VDB Editor : Figure E.18. VDB Editor The VDB Editor contains the following tabs and buttons: Table E.1. Tabs and Buttons on the VDB Editor Name Description Schemas This tab manages the schema files included in the VDB. UDF Jars This tab manages the User-Defined Function jars included in the VDB. Other Files This tab manages the non-model files included in the VDB. Description In this tab, you can set descriptions of the VDB. Properties This tab manages VDB properties. User Properties This tab manages user-defined VDB properties. Translator Overrides This tab manages the overridden translators and their properties. Synchronize All This button synchronizes all VDB entries with their corresponding workspace files. Show Import VDBs This button is enabled if VDB imports exist for a VDB and allows viewing the names and versions of the imported VDBs. Deploy This button deploys the selected VDB to JBoss Data Virtualization. Test This button deploys the selected VDB to JBoss Data Virtualization, creates a Teiid Connection Profile specific for that VDB, opens the Database Development perspective, and creates a connection to your VDB. Save as XML This button generates an XML version of the *.vdb archive and saves it to the workspace or to local file system. You can manage your VDB contents by using the Add or Remove models via the buttons at the right. Set individual model visibility via the Visibility checkbox for each model. This provides low level data access security by removing specific models and their metadata contents from schema exposed in GUI tools. In order for a VDB to be fully queryable the Source Name, Translator, and JNDI Names must have valid values and represent deployed artifacts on your JBoss Data Virtualization server. The Filter bar provides full text search capability in the VDB editor. You can also choose a filter by type. The supported types are: ALL, Source, View, Web, and XML Doc. The X button clears the search bar and returns it to its default state. If you have Teiid Designer runtime plugins installed, and have a JBoss Data Virtualization server running, you can select a source model in the VDB Editor and right-click select Change Translator or Change JNDI Data Source which will allow you to select any applicable artifacts on your server. Figure E.19. Change Translator or Data Source Actions If you have a default JBoss Data Virtualization server instance defined and connected the translator and JNDI table cells will contain drop-down lists of available translator and JNDI names available on that server. E.8.2. Editing Data Roles Teiid Designer provides a means to create, edit and manage data roles specific to a VDB. Once deployed within a JBoss Data Virtualization server with the security option turned on (by default) any query run against this VDB via a Teiid JDBC connection will adhere to the data access permissions defined by the VDB's data roles. The VDB Editor contains a VDB Data Roles section consisting of a List of current data roles and New... , Edit... and Remove action buttons. Figure E.20. 
VDB Data Roles Panel Clicking New... or Edit... will launch the New VDB Data Role editor dialog. Specify a unique data role name, add an optional description, and modify the individual model element CRUD values by selecting or clearing entries in the models section. Figure E.21. VDB Data Roles Tab The Filter bar provides full text search capability in the VDB editor. You can also choose a filter by type. The supported types are: ALL, Source, View, Web, and XML Doc. The X button clears the search bar and returns it to its default state. E.8.3. Editing Translator Overrides Teiid Designer provides a means to create, edit and manage translator override properties specific to a VDB via the Translator Overrides tab. A translator override is a set of non-default properties targeted for a specific source model's data source. Each translator override therefore requires a target translator name, like oracle, db2, or mysql, and a set of non-default key-value property sets. The VDB Editor contains a Translator Overrides section consisting of a list of current translator overrides on the left, a properties editor panel on the right, and Add (+) and Remove (-) action buttons on the lower part of the panel. Figure E.22. VDB Translator Overrides Tab To override a specific translator type, click the add translator action (+) . If a default JBoss Data Virtualization server instance is connected and available, the Add Translator Override dialog (below) is displayed. Select an existing translator type and click OK . Note that the override is only applicable to sources within the VDB, so be sure to select a translator type that corresponds to one of the VDB's source models. The properties panel on the right side of the panel will contain editable cells for each property type based on the datatype of the property (for example, boolean, integer, or string). Figure E.23. Add Translator Override Dialog If no default Teiid server instance is available, the Add New Translator Override dialog is displayed. Enter a unique name for the translator override (for example, oracle_override) and a valid translator type name (for example, oracle), and click OK . The properties panel on the right side of the panel will allow adding, editing and removing key-value string-based property sets. When editing these properties, all values will be treated as type string. Figure E.24. Add New Translator Override Dialog
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/sect-VDB_Editor
A.8. vmstat
A.8. vmstat Vmstat outputs reports on your system's processes, memory, paging, block input/output, interrupts, and CPU activity. It provides an instantaneous report of the average of these events since the machine was last booted, or since the previous report. -a Displays active and inactive memory. -f Displays the number of forks since boot. This includes the fork , vfork , and clone system calls, and is equivalent to the total number of tasks created. Each process is represented by one or more tasks, depending on thread usage. This display does not repeat. -m Displays slab information. -n Specifies that the header will appear once, not periodically. -s Displays a table of various event counters and memory statistics. This display does not repeat. delay The delay between reports in seconds. If no delay is specified, only one report is printed, with the average values since the machine was last booted. count The number of times to report on the system. If no count is specified and delay is defined, vmstat reports indefinitely. -d Displays disk statistics. -p Takes a partition name as a value, and reports detailed statistics for that partition. -S Defines the units output by the report. Valid values are k (1000 bytes), K (1024 bytes), m (1,000,000 bytes), or M (1,048,576 bytes). -D Reports summary statistics about disk activity. For detailed information about the output provided by each output mode, see the vmstat man page.
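For example, the following invocations combine the options described above. They are illustrative only and are not part of the original reference; adjust the partition name to one that exists on your system.
$ vmstat -S M 5 10    # ten reports at five-second intervals, with units shown in megabytes
$ vmstat -d           # display per-disk statistics
$ vmstat -p sda1      # detailed statistics for the sda1 partition (placeholder name)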
[ "man vmstat" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-vmstat
23.2. Configuring a DHCP Server
23.2. Configuring a DHCP Server To configure a DHCP server, the /etc/dhcpd.conf configuration file must be created. A sample file can be found at /usr/share/doc/dhcp-< version >/dhcpd.conf.sample . DHCP also uses the file /var/lib/dhcp/dhcpd.leases to store the client lease database. Refer to Section 23.2.2, "Lease Database" for more information. 23.2.1. Configuration File The first step in configuring a DHCP server is to create the configuration file that stores the network information for the clients. Global options can be declared for all clients, while other options can be declared for individual client systems. The configuration file can contain extra tabs or blank lines for easier formatting. Keywords are case-insensitive, and lines beginning with a hash mark (#) are considered comments. Two DNS update schemes are currently implemented - the ad-hoc DNS update mode and the interim DHCP-DNS interaction draft update mode. If and when these two are accepted as part of the Internet Engineering Task Force (IETF) standards process, there will be a third mode - the standard DNS update method. The DHCP server must be configured to use one of the two current schemes. Version 3.0b2pl11 and previous versions used the ad-hoc mode; however, it has been deprecated. To keep the same behavior, add the following line to the top of the configuration file: To use the recommended mode, add the following line to the top of the configuration file: Refer to the dhcpd.conf man page for details about the different modes. There are two types of statements in the configuration file: Parameters - State how to perform a task, whether to perform a task, or what network configuration options to send to the client. Declarations - Describe the topology of the network, describe the clients, provide addresses for the clients, or apply a group of parameters to a group of declarations. Some parameters must start with the option keyword and are referred to as options. Options configure DHCP options; whereas parameters configure values that are not optional or control how the DHCP server behaves. Parameters (including options) declared before a section enclosed in curly brackets ({ }) are considered global parameters. Global parameters apply to all the sections below them. Important If the configuration file is changed, the changes do not take effect until the DHCP daemon is restarted with the command service dhcpd restart . Note Instead of changing a DHCP configuration file and restarting the service each time, using the omshell command provides an interactive way to connect to, query, and change the configuration of a DHCP server. By using omshell , all changes can be made while the server is running. For more information on omshell , refer to the omshell man page. In Example 23.1, "Subnet Declaration" , the routers , subnet-mask , domain-name , domain-name-servers , and time-offset options are used for any host statements declared below it. Additionally, a subnet can be declared; a subnet declaration must be included for every subnet in the network. If it is not, the DHCP server fails to start. In this example, there are global options for every DHCP client in the subnet and a range declared. Clients are assigned an IP address within the range . Example 23.1. Subnet Declaration All subnets that share the same physical network should be declared within a shared-network declaration as shown in Example 23.2, "Shared-network Declaration" .
Parameters within the shared-network , but outside the enclosed subnet declarations, are considered to be global parameters. The name of the shared-network should be a descriptive title for the network, such as using the title 'test-lab' to describe all the subnets in a test lab environment. Example 23.2. Shared-network Declaration As demonstrated in Example 23.3, "Group Declaration" , the group declaration can be used to apply global parameters to a group of declarations. For example, shared networks, subnets, and hosts can be grouped. Example 23.3. Group Declaration To configure a DHCP server that leases a dynamic IP address to a system within a subnet, modify Example 23.4, "Range Parameter" with your values. It declares a default lease time, maximum lease time, and network configuration values for the clients. This example assigns IP addresses in the range 192.168.1.10 to 192.168.1.100 to client systems. Example 23.4. Range Parameter To assign an IP address to a client based on the MAC address of the network interface card, use the hardware ethernet parameter within a host declaration. As demonstrated in Example 23.5, "Static IP Address using DHCP" , the host apex declaration specifies that the network interface card with the MAC address 00:A0:78:8E:9E:AA always receives the IP address 192.168.1.4. Note that the optional parameter host-name can also be used to assign a host name to the client. Example 23.5. Static IP Address using DHCP Note The sample configuration file provided can be used as a starting point, and custom configuration options can be added to it. To copy it to the proper location, use the following command, where <version-number> is the DHCP version number. For a complete list of option statements and what they do, refer to the dhcp-options man page.
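Before restarting the daemon after editing /etc/dhcpd.conf , it is good practice to check the file for syntax errors. The following commands are a general illustration and are not part of the original chapter; dhcpd -t performs a configuration test without starting the server.
$ dhcpd -t -cf /etc/dhcpd.conf
$ service dhcpd restart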
[ "ddns-update-style ad-hoc;", "ddns-update-style interim;", "subnet 192.168.1.0 netmask 255.255.255.0 { option routers 192.168.1.254; option subnet-mask 255.255.255.0; option domain-name \"example.com\"; option domain-name-servers 192.168.1.1; option time-offset -18000; # Eastern Standard Time range 192.168.1.10 192.168.1.100; }", "shared-network name { option domain-name \"test.redhat.com\"; option domain-name-servers ns1.redhat.com, ns2.redhat.com; option routers 192.168.0.254; more parameters for EXAMPLE shared-network subnet 192.168.1.0 netmask 255.255.252.0 { parameters for subnet range 192.168.1.1 192.168.1.254; } subnet 192.168.2.0 netmask 255.255.252.0 { parameters for subnet range 192.168.2.1 192.168.2.254; } }", "group { option routers 192.168.1.254; option subnet-mask 255.255.255.0; option domain-name \"example.com\"; option domain-name-servers 192.168.1.1; option time-offset -18000; # Eastern Standard Time host apex { option host-name \"apex.example.com\"; hardware ethernet 00:A0:78:8E:9E:AA; fixed-address 192.168.1.4; } host raleigh { option host-name \"raleigh.example.com\"; hardware ethernet 00:A1:DD:74:C3:F2; fixed-address 192.168.1.6; } }", "default-lease-time 600; max-lease-time 7200; option subnet-mask 255.255.255.0; option broadcast-address 192.168.1.255; option routers 192.168.1.254; option domain-name-servers 192.168.1.1, 192.168.1.2; option domain-name \"example.com\"; subnet 192.168.1.0 netmask 255.255.255.0 { range 192.168.1.10 192.168.1.100; }", "host apex { option host-name \"apex.example.com\"; hardware ethernet 00:A0:78:8E:9E:AA; fixed-address 192.168.1.4; }", "cp /usr/share/doc/dhcp- <version-number> /dhcpd.conf.sample /etc/dhcpd.conf" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/dynamic_host_configuration_protocol_dhcp-configuring_a_dhcp_server
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_in_external_mode/making-open-source-more-inclusive
4.2.2. Tracking I/O Time For Each File Read or Write
4.2.2. Tracking I/O Time For Each File Read or Write This section describes how to monitor the amount of time it takes for each process to read from or write to any file. This is useful if you wish to determine what files are slow to load on a given system. iotime.stp iotime.stp tracks each time a system call opens, closes, reads from, and writes to a file. For each file any system call accesses, iotime.stp counts the number of microseconds it takes for any reads or writes to finish and tracks the amount of data (in bytes) read from or written to the file. iotime.stp also uses the local variable USDcount to track the amount of data (in bytes) that any system call attempts to read or write. Note that USDreturn (as used in disktop.stp from Section 4.2.1, "Summarizing Disk Read/Write Traffic" ) stores the actual amount of data read/written. USDcount can only be used on probes that track data reads or writes (for example syscall.read and syscall.write ). Example 4.6. iotime.stp Sample Output Example 4.6, "iotime.stp Sample Output" prints out the following data: A timestamp, in microseconds. Process ID and process name. An access or iotime flag. The file accessed. If a process was able to read or write any data, a pair of access and iotime lines should appear together. The access line's timestamp refers to the time that a given process started accessing a file; at the end of the line, it will show the amount of data read/written (in bytes). The iotime line will show the amount of time (in microseconds) that the process took in order to perform the read or write. If an access line is not followed by an iotime line, it simply means that the process did not read or write any data.
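To run the script, pass it to the stap command. The following invocations are illustrative; the output file name and the traced command are placeholders.
$ stap iotime.stp
$ stap -o iotime.log iotime.stp    # write the script output to a file instead of the terminal
$ stap -c "cat /etc/hosts" iotime.stp    # run the given command and stop the script when it exits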
[ "global start global entry_io global fd_io global time_io function timestamp:long() { return gettimeofday_us() - start } function proc:string() { return sprintf(\"%d (%s)\", pid(), execname()) } probe begin { start = gettimeofday_us() } global filenames global filehandles global fileread global filewrite probe syscall.open { filenames[pid()] = user_string(USDfilename) } probe syscall.open.return { if (USDreturn != -1) { filehandles[pid(), USDreturn] = filenames[pid()] fileread[pid(), USDreturn] = 0 filewrite[pid(), USDreturn] = 0 } else { printf(\"%d %s access %s fail\\n\", timestamp(), proc(), filenames[pid()]) } delete filenames[pid()] } probe syscall.read { if (USDcount > 0) { fileread[pid(), USDfd] += USDcount } t = gettimeofday_us(); p = pid() entry_io[p] = t fd_io[p] = USDfd } probe syscall.read.return { t = gettimeofday_us(); p = pid() fd = fd_io[p] time_io[p,fd] <<< t - entry_io[p] } probe syscall.write { if (USDcount > 0) { filewrite[pid(), USDfd] += USDcount } t = gettimeofday_us(); p = pid() entry_io[p] = t fd_io[p] = USDfd } probe syscall.write.return { t = gettimeofday_us(); p = pid() fd = fd_io[p] time_io[p,fd] <<< t - entry_io[p] } probe syscall.close { if (filehandles[pid(), USDfd] != \"\") { printf(\"%d %s access %s read: %d write: %d\\n\", timestamp(), proc(), filehandles[pid(), USDfd], fileread[pid(), USDfd], filewrite[pid(), USDfd]) if (@count(time_io[pid(), USDfd])) printf(\"%d %s iotime %s time: %d\\n\", timestamp(), proc(), filehandles[pid(), USDfd], @sum(time_io[pid(), USDfd])) } delete fileread[pid(), USDfd] delete filewrite[pid(), USDfd] delete filehandles[pid(), USDfd] delete fd_io[pid()] delete entry_io[pid()] delete time_io[pid(),USDfd] }", "[...] 825946 3364 (NetworkManager) access /sys/class/net/eth0/carrier read: 8190 write: 0 825955 3364 (NetworkManager) iotime /sys/class/net/eth0/carrier time: 9 [...] 117061 2460 (pcscd) access /dev/bus/usb/003/001 read: 43 write: 0 117065 2460 (pcscd) iotime /dev/bus/usb/003/001 time: 7 [...] 3973737 2886 (sendmail) access /proc/loadavg read: 4096 write: 0 3973744 2886 (sendmail) iotime /proc/loadavg time: 11 [...]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/iotimesect
2.3. Special Considerations for Public Cloud Operators
2.3. Special Considerations for Public Cloud Operators Public cloud service providers are exposed to a number of security risks beyond that of the traditional virtualization user. Virtual guest isolation, both between the host and guest as well as between guests, is critical due to the threat of malicious guests and the requirements on customer data confidentiality and integrity across the virtualization infrastructure. In addition to the Red Hat Enterprise Linux virtualization recommended practices previously listed, public cloud operators should also consider the following items: Disallow any direct hardware access from the guest. PCI, USB, FireWire, Thunderbolt, eSATA, and other device passthrough mechanisms make management difficult and often rely on the underlying hardware to enforce separation between the guests. Isolate the cloud operator's private management network from the customer guest network, and customer networks from one another, so that: The guests cannot access the host systems over the network. One customer cannot access another customer's guest systems directly through the cloud provider's internal network.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_security_guide/sect-virtualization_security_guide-host_security-host_security_recommended_practices_for_red_hat_enterprise_linux-special_considerations_for_public_cloud_operators
probe::scsi.ioexecute
probe::scsi.ioexecute Name probe::scsi.ioexecute - Create mid-layer SCSI request and wait for the result Synopsis scsi.ioexecute Values host_no The host number channel The channel number data_direction The data_direction specifies whether this command is from/to the device. lun The lun number retries Number of times to retry request device_state_str The current state of the device, as a string data_direction_str Data direction, as a string dev_id The scsi device id request_buffer The data buffer address request_bufflen The data buffer buffer length device_state The current state of the device timeout Request timeout in seconds
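As an illustration only (not part of the reference entry), a one-line script can print a subset of these values each time the probe fires; the format string and the chosen variables are arbitrary.
$ stap -e 'probe scsi.ioexecute { printf("%d:%d:%d:%d %s len=%d timeout=%d\n", host_no, channel, lun, dev_id, data_direction_str, request_bufflen, timeout) }'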
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-scsi-ioexecute
Chapter 1. Introduction to persistent storage in Red Hat OpenStack Platform (RHOSP)
Chapter 1. Introduction to persistent storage in Red Hat OpenStack Platform (RHOSP) Within Red Hat OpenStack Platform, storage is provided by three main services: Block Storage ( openstack-cinder ) Object Storage ( openstack-swift ) Shared File System Storage ( openstack-manila ) These services provide different types of persistent storage, each with its own set of advantages in different use cases. This guide discusses the suitability of each for general enterprise storage requirements. You can manage cloud storage by using either the RHOSP dashboard or the command-line clients. You can perform most procedures by using either method. However, you can complete some of the more advanced procedures only on the command line. This guide provides procedures for the dashboard where possible. Note For the complete suite of documentation for Red Hat OpenStack Platform, see Red Hat OpenStack Platform Documentation . Important This guide documents the use of crudini to apply some custom service settings. As such, you need to install the crudini package first: RHOSP recognizes two types of storage: ephemeral and persistent . Ephemeral storage is storage that is associated only to a specific Compute instance. Once that instance is terminated, so is its ephemeral storage. This type of storage is useful for basic runtime requirements, such as storing the instance's operating system. Persistent storage, is designed to survive (persist) independent of any running instance. This storage is used for any data that needs to be reused, either by different instances or beyond the life of a specific instance. RHOSP uses the following types of persistent storage: Volumes The OpenStack Block Storage service ( openstack-cinder ) allows users to access block storage devices through volumes . Users can attach volumes to instances in order to augment their ephemeral storage with general-purpose persistent storage. Volumes can be detached and re-attached to instances at will, and can only be accessed through the instance they are attached to. You can also configure instances to not use ephemeral storage. Instead of using ephemeral storage, you can configure the Block Storage service to write images to a volume. You can then use the volume as a bootable root volume for an instance. Volumes also provide inherent redundancy and disaster recovery through backups and snapshots. In addition, you can also encrypt volumes for added security. Containers The OpenStack Object Storage service (openstack-swift) provides a fully-distributed storage solution used to store any kind of static data or binary object, such as media files, large datasets, and disk images. The Object Storage service organizes these objects by using containers. Although the content of a volume can be accessed only through instances, the objects inside a container can be accessed through the Object Storage REST API. As such, the Object Storage service can be used as a repository by nearly every service within the cloud. Shares The Shared File Systems service ( openstack-manila ) provides the means to easily provision remote, shareable file systems, or shares . Shares allow projects within the cloud to openly share storage, and can be consumed by multiple instances simultaneously. Each storage type is designed to address specific storage requirements. Containers are designed for wide access, and as such feature the highest throughput, access, and fault tolerance among all storage types. Container usage is geared more towards services. 
On the other hand, volumes are used primarily for instance consumption. They do not enjoy the same level of access and performance as containers, but they do have a larger feature set and have more native security features than containers. Shares are similar to volumes in this regard, except that they can be consumed by multiple instances. The following sections discuss each storage type's architecture and feature set in detail, within the context of specific storage criteria. 1.1. Scalability and back-end storage In general, a clustered storage solution provides greater back-end scalability. For example, when you use Red Hat Ceph Storage as a Block Storage (cinder) back end, you can scale storage capacity and redundancy by adding more Ceph Object Storage Daemon (OSD) nodes. The Block Storage, Object Storage (swift), and Shared File Systems Storage (manila) services support Red Hat Ceph Storage as a back end. The Block Storage service can use multiple storage solutions as discrete back ends. At the back-end level, you can scale capacity by adding more back ends and restarting the service. The Block Storage service also features a large list of supported back-end solutions, some of which feature additional scalability features. By default, the Object Storage service uses the file system on configured storage nodes, and it can use as much space as is available. The Object Storage service supports the XFS and ext4 file systems, and both can be scaled up to consume as much underlying block storage as is available. You can also scale capacity by adding more storage devices to the storage node. The Shared File Systems service provisions file shares from designated storage pools that are managed by one or more third-party back-end storage systems. You can scale this shared storage by increasing the size or number of storage pools available to the service or by adding more third-party back-end storage systems to the deployment. 1.2. Storage accessibility and administration Volumes are consumed only through instances, and can only be attached to and mounted within one instance at a time. Users can create snapshots of volumes, which can be used for cloning or for restoring a volume to a previous state. For more information, see Section 1.4, "Storage redundancy and disaster recovery" . As a project administrator, you can use the Block Storage service to create volume types , which aggregate volume settings, such as size and back end. You can associate volume types with Quality of Service (QoS) specifications to provide different levels of performance for your cloud users. Your users can specify the volume type they require when creating new volumes. For example, volumes that use higher performance QoS specifications could provide your users with more IOPS, or your users could assign lighter workloads to volumes that use lower performance QoS specifications to conserve resources. Like volumes, shares are consumed through instances. However, shares can be directly mounted within an instance, and do not need to be attached through the dashboard or CLI. Shares can also be mounted by multiple instances simultaneously. The Shared File Systems service also supports share snapshots and cloning; you can also create share types to aggregate settings (similar to volume types). Objects in a container are accessible via API, and can be made accessible to instances and services within the cloud. 
This makes them ideal as object repositories for services; for example, the Image service ( openstack-glance ) can store its images in containers managed by the Object Storage service. 1.3. Storage security The Block Storage service (cinder) provides basic data security through volume encryption. With this, you can configure a volume type to be encrypted through a static key; the key is then used to encrypt all volumes that are created from the configured volume type. For more information, see Section 2.7, "Block Storage service (cinder) volume encryption" . Object and container security is configured at the service and node level. The Object Storage service (swift) provides no native encryption for containers and objects. Rather, the Object Storage service prioritizes accessibility within the cloud, and as such relies solely on the cloud network security to protect object data. The Shared File Systems service (manila) can secure shares through access restriction, whether by instance IP, user or group, or TLS certificate. In addition, some Shared File Systems service deployments can feature separate share servers to manage the relationship between share networks and shares; some share servers support, or even require, additional network security. For example, a CIFS share server requires the deployment of an LDAP, Active Directory, or Kerberos authentication service. For more information about how to secure the Image service (glance), such as image signing and verification and metadata definition (metadef) API restrictions, see The Image service (glance) in Creating and Managing Images . 1.4. Storage redundancy and disaster recovery The Block Storage service (cinder) features volume backup and restoration, which provides basic disaster recovery for user storage. Use backups to protect volume contents. The service also supports snapshots. In addition to cloning, you can use snapshots to restore a volume to a previous state. In a multi-back end environment, you can also migrate volumes between back ends. This is useful if you need to take a back end offline for maintenance. Backups are typically stored in a storage back end separate from their source volumes to help protect the data. This is not possible with snapshots because snapshots are dependent on their source volumes. The Block Storage service also supports the creation of consistency groups to group volumes together for simultaneous snapshot creation. This provides a greater level of data consistency across multiple volumes. For more information, see Section 2.9, "Block Storage service (cinder) consistency groups" . The Object Storage service (swift) provides no built-in backup features. You must perform all backups at the file system or node level. The Object Storage service features more robust redundancy and fault tolerance; even the most basic deployment of the Object Storage service replicates objects multiple times. You can use failover features like dm-multipath to enhance redundancy. The Shared File Systems service provides no built-in backup features for shares, but it does allow you to create snapshots for cloning and restoration.
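As a minimal command-line illustration of the volume workflow described above, the following sketch creates a Block Storage volume, attaches it to an instance, and then protects it with a snapshot and a backup. The volume, instance, snapshot, and backup names are placeholders; substitute the names used in your environment, and note that the backup command assumes a backup back end is already configured in your deployment:
# Create a 10 GB volume and attach it to a running instance
openstack volume create --size 10 my-volume
openstack server add volume my-instance my-volume
# Create a point-in-time snapshot of the volume
openstack volume snapshot create --volume my-volume my-volume-snap
# Create a backup, which is stored in the backup back end rather than with the source volume
openstack volume backup create --name my-volume-backup my-volume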
[ "dnf install crudini -y" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/storage_guide/assembly_introduction-to-persistent-storage-in-rhosp_osp-storage-guide
A.2. Installing cURL
A.2. Installing cURL On Red Hat Enterprise Linux, install cURL with the following terminal command: yum install curl For other platforms, see the installation instructions on the cURL website ( http://curl.haxx.se/ ).
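Once cURL is installed, a typical first request against the REST API is to retrieve the API entry point. The following sketch assumes a Manager host of rhvm.example.com , the admin@internal user, and a CA certificate saved locally as ca.crt ; replace these placeholder values with the details of your environment:
# Request the top-level API entry point as XML
curl -X GET -H "Accept: application/xml" -u "admin@internal:password" --cacert ca.crt https://rhvm.example.com/ovirt-engine/api
If the request succeeds, the response describes the API entry point, including links to the available collections.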
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/installing_curl
2.3. Expressions
2.3. Expressions 2.3.1. Expressions Identifiers, literals, and functions can be combined into expressions. Expressions can be used almost anywhere in a query -- SELECT, FROM (if specifying join criteria), WHERE, GROUP BY, HAVING, or ORDER BY. JBoss Data Virtualization supports the following types of expressions: Column identifiers Refer to Section 2.3.2, "Column Identifiers" . Literals Refer to Section 2.3.3, "Literals" . Aggregate functions Refer to Section 2.3.4, "Aggregate Functions" . Window functions Refer to Section 2.3.5, "Window Functions" . Case and searched case Refer to Section 2.3.8, "Case and Searched Case" . Scalar subqueries Refer to Section 2.3.9, "Scalar Subqueries" . Parameter references Refer to Section 2.3.10, "Parameter References" . Criteria Refer to Section 2.3.11, "Criteria" . 2.3.2. Column Identifiers Column identifiers are used to specify the output columns in SELECT statements, the columns and their values for INSERT and UPDATE statements, and criteria used in WHERE and FROM clauses. They are also used in GROUP BY, HAVING, and ORDER BY clauses. The syntax for column identifiers is defined in Section 2.2.1, "Identifiers" . 2.3.3. Literals Literal values represent fixed values. These can be any of the standard data types. See Section 3.1, "Supported Types" . Syntax Rules: Integer values will be assigned an integral data type big enough to hold the value (integer, long, or biginteger). Floating point values will always be parsed as a double. The keyword 'null' is used to represent an absent or unknown value and is inherently untyped. In many cases, a null literal value will be assigned an implied type based on context. For example, in the function '5 + null', the null value will be assigned the type 'integer' to match the type of the value '5'. A null literal used in the SELECT clause of a query with no implied context will be assigned to type 'string'. Some examples of simple literal values are: 'abc' 'isn''t true' - use an extra single tick to escape a tick in a string with single ticks 5 -37.75e01 - scientific notation 100.0 - parsed as BigDecimal true false '\u0027' - unicode character 2.3.4. Aggregate Functions Aggregate functions take sets of values from a group produced by an explicit or implicit GROUP BY and return a single scalar value computed from the group. JBoss Data Virtualization supports the following aggregate functions: COUNT(*) - count the number of values (including nulls and duplicates) in a group COUNT(x) - count the number of values (excluding nulls) in a group SUM(x) - sum of the values (excluding nulls) in a group AVG(x) - average of the values (excluding nulls) in a group MIN(x) - minimum value in a group (excluding null) MAX(x) - maximum value in a group (excluding null) ANY(x) / SOME(x) - returns TRUE if any value in the group is TRUE (excluding null) EVERY(x) - returns TRUE if every value in the group is TRUE (excluding null) VAR_POP(x) - biased variance (excluding null) logically equals (sum(x^2) - sum(x)^2/count(x))/count(x); returns a double; null if count = 0 VAR_SAMP(x) - sample variance (excluding null) logically equals (sum(x^2) - sum(x)^2/count(x))/(count(x) - 1); returns a double; null if count < 2 STDDEV_POP(x) - standard deviation (excluding null) logically equals SQRT(VAR_POP(x)) STDDEV_SAMP(x) - sample standard deviation (excluding null) logically equals SQRT(VAR_SAMP(x)) TEXTAGG(FOR (expression [as name], ... 
[DELIMITER char] [QUOTE char] [HEADER] [ENCODING id] [ORDER BY ...]) - CSV text aggregation of all expressions in each row of a group. When DELIMITER is not specified, a comma (,) is used as the delimiter by default. The double quote (") is the default quote character. Use QUOTE to specify a different value. All non-null values will be quoted. If HEADER is specified, the result contains the header row as the first line. The header line will be present even if there are no rows in a group. This aggregation returns a BLOB. See Section 2.6.15, "ORDER BY Clause" . Example: XMLAGG(xml_expr [ORDER BY ...]) - XML concatenation of all XML expressions in a group (excluding null). The ORDER BY clause cannot reference alias names or use positional ordering. See Section 2.6.15, "ORDER BY Clause" . JSONARRAY_AGG(x [ORDER BY ...]) - creates a JSON array result as a CLOB including null value. The ORDER BY clause cannot reference alias names or use positional ordering. Also see Section 2.4.15, "JSON Functions" . Integer value example: could return STRING_AGG(x, delim) - creates a LOB result from the concatenation of x using the delimiter delim. If either argument is null, no value is concatenated. Both arguments are expected to be character (string/clob) or binary (varbinary, blob) and the result will be clob or blob respectively. DISTINCT and ORDER BY are allowed in STRING_AGG. Example: could return agg([DISTINCT|ALL] arg ... [ORDER BY ...]) - this is a user-defined aggregate function. ARRAY_AGG(x [ORDER BY ...]) - This creates an array with a base type matching the expression x. The ORDER BY clause cannot reference alias names or use positional ordering. Syntax Rules: Some aggregate functions may contain the keyword 'DISTINCT' before the expression, indicating that duplicate expression values should be ignored. DISTINCT is not allowed in COUNT(*) and is not meaningful in MIN or MAX (result would be unchanged), so it can be used in COUNT, SUM, and AVG. Aggregate functions cannot be used in FROM, GROUP BY, or WHERE clauses without an intervening query expression. Aggregate functions cannot be nested within another aggregate function without an intervening query expression. Aggregate functions may be nested inside other functions. Any aggregate function may take an optional FILTER clause of the following form: The condition may be any boolean value expression that does not contain a subquery or a correlated variable. The filter will logically be evaluated for each row prior to the grouping operation. If false, the aggregate function will not accumulate a value for the given row. User defined aggregate functions need ALL specified if no other aggregate specific constructs are used to distinguish the function as an aggregate rather than a normal function. For more information on aggregates, refer to Section 2.6.13, "GROUP BY Clause" and Section 2.6.14, "HAVING Clause" . 2.3.5. Window Functions JBoss Data Virtualization supports ANSI SQL 2003 window functions. A window function allows an aggregate function to be applied to a subset of the result set, without the need for a GROUP BY clause. A window function is similar to an aggregate function, but requires the use of an OVER clause or window specification. Usage: In the above example, aggregate can be any of those in Section 2.3.4, "Aggregate Functions" . Ranking can be one of ROW_NUMBER(), RANK(), DENSE_RANK(). Syntax Rules: Window functions can only appear in the SELECT and ORDER BY clauses of a query expression. Window functions cannot be nested in one another. 
Partitioning and ORDER BY expressions cannot contain subqueries or outer references. The ranking (ROW_NUMBER, RANK, DENSE_RANK) functions require the use of the window specification ORDER BY clause. An XMLAGG ORDER BY clause cannot be used when windowed. The window specification ORDER BY clause cannot reference alias names or use positional ordering. Windowed aggregates may not use DISTINCT if the window specification is ordered. 2.3.6. Window Functions: Analytical Function Definitions ROW_NUMBER() - functionally the same as COUNT(*) with the same window specification. Assigns a number to each row in a partition starting at 1. RANK() - Assigns a number to each unique ordering value within each partition starting at 1, such that the rank is equal to the count of prior rows. DENSE_RANK() - Assigns a number to each unique ordering value within each partition starting at 1, such that the rank is sequential. 2.3.7. Window Functions: Processing Window functions are logically processed just before creating the output from the SELECT clause. Window functions can use nested aggregates if a GROUP BY clause is present. There is no guaranteed effect on the output ordering from the presence of window functions. The SELECT statement must have an ORDER BY clause to have a predictable ordering. JBoss Data Virtualization will process all window functions with the same window specification together. In general, a full pass over the row values coming into the SELECT clause will be required for each unique window specification. For each window specification the values will be grouped according to the PARTITION BY clause. If no PARTITION BY clause is specified, then the entire input is treated as a single partition. The output value is determined based upon the current row value, its peers (that is rows that are the same with respect to their ordering), and all prior row values based upon ordering in the partition. The ROW_NUMBER function will assign a unique value to every row regardless of the number of peers. Example windowed results: SELECT name, salary, max(salary) over (partition by name) as max_sal, rank() over (order by salary) as rank, dense_rank() over (order by salary) as dense_rank, row_number() over (order by salary) as row_num FROM employees name salary max_sal rank dense_rank row_num John 100000 100000 2 2 2 Henry 50000 50000 5 4 5 John 60000 100000 3 3 3 Suzie 60000 150000 3 3 4 Suzie 150000 150000 1 1 1 2.3.8. Case and Searched Case JBoss Data Virtualization supports two forms of the CASE expression which allows conditional logic in a scalar expression. Supported forms: CASE <expr> ( WHEN <expr> THEN <expr>)+ [ELSE expr] END CASE ( WHEN <criteria> THEN <expr>)+ [ELSE expr] END Each form allows for an output based on conditional logic. The first form starts with an initial expression and evaluates WHEN expressions until the values match, and outputs the THEN expression. If no WHEN is matched, the ELSE expression is output. If no WHEN is matched and no ELSE is specified, a null literal value is output. The second form (the searched case expression) searches the WHEN clauses, which specify arbitrary criteria to evaluate. If any criteria evaluates to true, the THEN expression is evaluated and output. If no WHEN is true, the ELSE is evaluated or NULL is output if none exists. 2.3.9. Scalar Subqueries Subqueries can be used to produce a single scalar value in the SELECT, WHERE, or HAVING clauses only. 
A scalar subquery must have a single column in the SELECT clause and should return either 0 or 1 row. If no rows are returned, null will be returned as the scalar subquery value. For other types of subqueries, refer to Section 2.5.10, "Subqueries" . 2.3.10. Parameter References Parameters are specified using a '?' symbol. Parameters may only be used with prepared statements or callable statements in JDBC. Each parameter is linked to a value specified by a one-based index in the JDBC API. 2.3.11. Criteria Criteria may be: Predicates that evaluate to true or false Logical criteria that combine criteria (AND, OR, NOT) A value expression with type boolean Usage: LIKE matches the string expression against the given string pattern. The pattern may contain % to match any number of characters and _ to match any single character. The escape character can be used to escape the match characters % and _. SIMILAR TO is a cross between LIKE and standard regular expression syntax. % and _ are still used, rather than .* and . respectively. Note JBoss Data Virtualization does not exhaustively validate SIMILAR TO pattern values. Rather, the pattern is converted to an equivalent regular expression. Care should be taken not to rely on general regular expression features when using SIMILAR TO. If additional features are needed, then LIKE_REGEX should be used. Usage of a non-literal pattern is discouraged as pushdown support is limited. LIKE_REGEX allows for standard regular expression syntax to be used for matching. This differs from SIMILAR TO and LIKE in that the escape character is no longer used (\ is already the standard escape mechanism in regular expressions), and % and _ have no special meaning. The runtime engine uses the JRE implementation of regular expressions - see the java.util.regex.Pattern class for details. Important JBoss Data Virtualization does not exhaustively validate LIKE_REGEX pattern values. It is possible to use JRE only regular expression features that are not specified by the SQL specification. Additionally, not all sources support the same regular expression syntax or extensions. Care should be taken in pushdown situations to ensure that the pattern used will have the same meaning in JBoss Data Virtualization and across all applicable sources. JBoss Data Virtualization converts BETWEEN into the equivalent form expression >= minExpression AND expression <= maxExpression. Finally, a criteria can be a bare value expression, in which case the expression must have type boolean. Syntax Rules: The precedence ordering from highest to lowest is: comparison, NOT, AND, OR. Criteria nested by parentheses will be logically evaluated prior to evaluating the parent criteria. Some examples of valid criteria are: (balance > 2500.0) 100*(50 - x)/(25 - y) > z concat(areaCode,concat('-',phone)) LIKE '314%1' Note Null values represent an unknown value. Comparison with a null value will evaluate to 'unknown', which can never be true even if 'not' is used. 2.3.12. Operator Precedence JBoss Data Virtualization parses and evaluates operators with higher precedence before those with lower precedence. Operators with equal precedence are left associative. The following operator precedence is listed from highest to lowest: Operator Description +,- positive/negative value expression *,/ multiplication/division +,- addition/subtraction || concat criteria see Section 2.3.11, "Criteria" 2.3.13. Criteria Precedence JBoss Data Virtualization parses and evaluates conditions with higher precedence before those with lower precedence. Conditions with equal precedence are left associative. 
The following condition precedence is listed from highest to lowest: Condition Description SQL operators See Section 2.3.1, "Expressions" EXISTS, LIKE, SIMILAR TO, LIKE_REGEX, BETWEEN, IN, IS NULL, <, <=, >, >=, =, <> comparison NOT negation AND conjunction OR disjunction Note however that to prevent lookaheads the parser does not accept all possible criteria sequences. For example "a = b is null" is not accepted, since by the left associative parsing we first recognize "a =", then look for a common value expression. "b is null" is not a valid common value expression. Thus nesting must be used, for example "(a = b) is null". See BNF for SQL Grammar for all parsing rules.
[ "TEXTAGG(col1, col2 as name DELIMITER '|' HEADER ORDER BY col1)", "jsonArray_Agg(col1 order by col1 nulls first)", "[null,null,1,2,3]", "string_agg(col1, ',' ORDER BY col1 ASC)", "'a,b,c'", "FILTER ( WHERE condition )", "aggregate |ranking OVER ([PARTITION BY expression [, expression]*] [ORDER BY ...])", "SELECT name, salary, max(salary) over (partition by name) as max_sal, rank() over (order by salary) as rank, dense_rank() over (order by salary) as dense_rank, row_number() over (order by salary) as row_num FROM employees", "criteria AND|OR criteria", "NOT criteria", "(criteria)", "expression (=|<>|!=|<|>|<=|>=) (expression|((ANY|ALL|SOME) subquery|(array_expression)))", "expression [NOT] IS NULL", "expression [NOT] IN (expression[,expression]*)|subquery", "expression [NOT] LIKE pattern [ESCAPE char]", "expression [NOT] SIMILAR TO pattern [ESCAPE char]", "expression [NOT] LIKE_REGEX pattern", "EXISTS(subquery)", "expression [NOT] BETWEEN minExpression AND maxExpression", "expression" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-Expressions
Chapter 1. Project APIs
Chapter 1. Project APIs 1.1. Project [project.openshift.io/v1] Description Projects are the unit of isolation and collaboration in OpenShift. A project has one or more members, a quota on the resources that the project may consume, and the security controls on the resources in the project. Within a project, members may have different roles - project administrators can set membership, editors can create and manage the resources, and viewers can see but not access running containers. In a normal cluster project administrators are not able to alter their quotas - that is restricted to cluster administrators. Listing or watching projects will return only projects the user has the reader role on. An OpenShift project is an alternative representation of a Kubernetes namespace. Projects are exposed as editable to end users while namespaces are not. Direct creation of a project is typically restricted to administrators, while end users should use the requestproject resource. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. ProjectRequest [project.openshift.io/v1] Description ProjectRequest is the set of options necessary to fully qualify a project request Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object
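As a brief illustration of the relationship between these two resources, an end user typically creates a project through a project request rather than by creating a Project object directly. The following sketch uses the oc CLI; the project name, display name, and description are placeholder values:
# Request a new project (a ProjectRequest, which results in a Project)
oc new-project my-project --display-name="My Project" --description="Example project"
# List the projects visible to the current user
oc get projects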
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/project_apis/project-apis
Chapter 5. Migrating applications secured by Red Hat Single Sign-On 7.6
Chapter 5. Migrating applications secured by Red Hat Single Sign-On 7.6 Red Hat build of Keycloak introduces key changes to how applications are using some of the Red Hat Single Sign-On 7.6 Client Adapters. In addition to no longer releasing some client adapters, Red Hat build of Keycloak also introduces fixes and improvements that impact how client applications use OpenID Connect and SAML protocols. In this chapter, you will find the instructions to address these changes and migrate your application to integrate with Red Hat build of Keycloak . 5.1. Migrating OpenID Connect Clients The following Java Client OpenID Connect Adapters are no longer released starting with this release of Red Hat build of Keycloak Red Hat JBoss Enterprise Application Platform 6.x Red Hat JBoss Enterprise Application Platform 7.x Spring Boot Red Hat Fuse Compared to when these adapters were first released, OpenID Connect is now widely available across the Java Ecosystem. Also, much better interoperability and support is achieved by using the capabilities available from the technology stack, such as your application server or framework. These adapters have reached their end of life and are only available from Red Hat Single Sign-On 7.6. It is highly recommended to look for alternatives to keep your applications updated with the latest updates from OAuth2 and OpenID connect protocols. 5.1.1. Key changes in OpenID Connect protocol and client settings 5.1.1.1. Access Type client option no longer available When you create or update an OpenID Connect client, Access Type is no longer available. However, you can use other methods to achieve this capability. To achieve the Bearer Only capability, create a client with no authentication flow. In the Capability config section of the client details, make sure that no flow is selected. The client cannot obtain any tokens from Keycloak, which is equivalent to using the Bearer Only access type. To achieve the Public capability, make sure that client authentication is disabled for this client and at least one flow is enabled. To achieve Confidential capability, make sure that Client Authentication is enabled for the client and at least one flow is enabled. The boolean flags bearerOnly and publicClient still exist on the client JSON object. They can be used when creating or updating a client by the admin REST API or when importing this client by partial import or realm import. However, these options are not directly available in the Admin Console v2. 5.1.1.2. Changes in validating schemes for valid redirect URIs If an application client is using non http(s) custom schemes, the validation now requires that a valid redirect pattern explicitly allows that scheme. Example patterns for allowing custom scheme are custom:/test, custom:/test/* or custom:. For security reasons, a general pattern such as * no longer covers them. 5.1.1.3. Support for the client_id parameter in OpenID Connect Logout Endpoint Support for the client_id parameter, which is based on the OIDC RP-Initiated Logout 1.0 specification. This capability is useful to detect what client should be used for Post Logout Redirect URI verification in case that id_token_hint parameter cannot be used. The logout confirmation screen still needs to be displayed to the user when only the client_id parameter is used without parameter id_token_hint , so clients are encouraged to use id_token_hint parameter if they do not want the logout confirmation screen to be displayed to the user. 5.1.2. 
Valid Post Logout Redirect URIs The Valid Post Logout Redirect URIs configuration option is added to the OIDC client and is aligned with the OIDC specification. You can use a different set of redirect URIs for redirection after login and logout. The value + used for Valid Post Logout Redirect URIs means that the logout uses the same set of redirect URIs as specified by the option of Valid Redirect URIs . This change also matches the default behavior when migrating from a version due to backwards compatibility. 5.1.2.1. UserInfo Endpoint Changes 5.1.2.1.1. Error response changes The UserInfo endpoint is now returning error responses fully compliant with RFC 6750 (The OAuth 2.0 Authorization Framework: Bearer Token Usage). Error code and description (if available) are provided as WWW-Authenticate challenge attributes rather than JSON object fields. The responses will be the following, depending on the error condition: In case no access token is provided: 401 Unauthorized WWW-Authenticate: Bearer realm="myrealm" In case several methods are used simultaneously to provide an access token (for example, Authorization header + POST access_token parameter), or POST parameters are duplicated: 400 Bad Request WWW-Authenticate: Bearer realm="myrealm", error="invalid_request", error_description="..." In case an access token is missing openid scope: 403 Forbidden WWW-Authenticate: Bearer realm="myrealm", error="insufficient_scope", error_description="Missing openid scope" In case of inability to resolve cryptographic keys for UserInfo response signing/encryption: 500 Internal Server Error In case of a token validation error, a 401 Unauthorized is returned in combination with the invalid_token error code. This error includes user and client related checks and actually captures all the remaining error cases: 401 Unauthorized WWW-Authenticate: Bearer realm="myrealm", error="invalid_token", error_description="..." 5.1.2.1.2. Other Changes to the UserInfo endpoint It is now required for access tokens to have the openid scope, which is stipulated by UserInfo being a feature specific to OpenID Connect and not OAuth 2.0. If the openid scope is missing from the token, the request will be denied as 403 Forbidden . See the preceding section. UserInfo now checks the user status, and returns the invalid_token response if the user is disabled. 5.1.2.1.3. Change of the default Client ID mapper of Service Account Client. Default Client ID mapper of Service Account Client has been changed. Token Claim Name field value has been changed from clientId to client_id . client_id claim is compliant with OAuth2 specifications: JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens OAuth 2.0 Token Introspection OAuth 2.0 Token Exchange clientId userSession note still exists. 5.1.2.1.4. Added iss parameter to OAuth 2.0/OpenID Connect Authentication Response RFC 9207 OAuth 2.0 Authorization Server Issuer Identification specification adds the parameter iss in the OAuth 2.0/OpenID Connect Authentication Response for realizing secure authorization responses. In past releases, we did not have this parameter, but now Red Hat build of Keycloak adds this parameter by default, as required by the specification. However, some OpenID Connect / OAuth2 adapters, and especially older Red Hat build of Keycloak adapters, may have issues with this new parameter. For example, the parameter will be always present in the browser URL after successful authentication to the client application. 
In these cases, it may be useful to disable adding the iss parameter to the authentication response. This can be done for the particular client in the Admin Console, in client details in the section with OpenID Connect Compatibility Modes . You can enable Exclude Issuer From Authentication Response to prevent adding the iss parameter to the authentication response. 5.2. Migrating Red Hat JBoss Enterprise Application Platform applications 5.2.1. Red Hat JBoss Enterprise Application Platform 8.x Your applications no longer need any additional dependency to integrate with Red Hat build of Keycloak or any other OpenID Provider. Instead, you can leverage the OpenID Connect support from the JBoss EAP native OpenID Connect Client. For more information, take a look at OpenID Connect in JBoss EAP . The JBoss EAP native adapter relies on a configuration schema very similar to the Red Hat build of Keycloak Adapter JSON Configuration. For instance, a deployment using a keycloak.json configuration file can be mapped to the following configuration in JBoss EAP: { "realm": "quickstart", "auth-server-url": "http://localhost:8180", "ssl-required": "external", "resource": "jakarta-servlet-authz-client", "credentials": { "secret": "secret" } } For examples of integrating Jakarta-based applications using the JBoss EAP native adapter with Red Hat build of Keycloak, see the following examples at the Red Hat build of Keycloak Quickstart Repository: JAX-RS Resource Server Servlet Application It is strongly recommended to migrate to the JBoss EAP native OpenID Connect client as it is the best candidate for Jakarta applications deployed to JBoss EAP 8 and newer. 5.2.2. Red Hat JBoss Enterprise Application Platform 7.x As Red Hat JBoss Enterprise Application Platform 7.x is close to ending full support, Red Hat build of Keycloak will not provide support for it. For existing applications deployed to Red Hat JBoss Enterprise Application Platform 7.x, adapters with maintenance support are available through Red Hat Single Sign-On 7.6. Red Hat Single Sign-On 7.6 adapters are supported for use in combination with the Red Hat build of Keycloak 22.0 server. 5.2.3. Red Hat JBoss Enterprise Application Platform 6.x As Red Hat JBoss Enterprise Application Platform (JBoss EAP) 6.x has reached end of maintenance support, going forward neither Red Hat Single Sign-On 7.6 nor Red Hat build of Keycloak will provide support for it. 5.3. Migrating Spring Boot applications The Spring Framework ecosystem is evolving fast, and you should have a much better experience by leveraging the OpenID Connect support already available there. Your applications no longer need any additional dependency to integrate with Red Hat build of Keycloak or any other OpenID Provider but can rely on the comprehensive OAuth2/OpenID Connect support from Spring Security. For more information, see OAuth2/OpenID Connect support from Spring Security . In terms of capabilities, it provides a standards-based OpenID Connect client implementation. An example of a capability that you might want to review, if not already using the standard protocols, is Logout . Red Hat build of Keycloak provides full support for standards-based logout protocols from the OpenID Connect ecosystem. For examples of how to integrate Spring Security applications with Red Hat build of Keycloak, see the Keycloak Quickstart Repository . 
If migrating from the Red Hat build of Keycloak Client Adapter for Spring Boot is not an option, you still have access to the adapter from Red Hat Single Sign-On 7.6, which is now in maintenance only support. Red Hat Single Sign-On 7.6 adapters are supported to be used in combination with the Red Hat build of Keycloak 22.0 server. 5.4. Migrating Red Hat Fuse applications As Red Hat Fuse has reached the end of full support, Red Hat build of Keycloak 22.0 will not provide any support for it. Red Hat Fuse adapters are still available with maintenance support through Red Hat Single Sign-On 7.6. Red Hat Single Sign-On 7.6 adapters are supported to be used in combination with the Red Hat build of Keycloak 22.0 server. 5.5. Migrating Applications Using the Authorization Services Policy Enforcer To support integration with the Red Hat build of Keycloak Authorization Services, the policy enforcer is available separately from the Java Client Adapters. <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-policy-enforcer</artifactId> <version>USD{Red Hat build of Keycloak .version}</version> </dependency> By decoupling it from the Java Client Adapters, it is possible now to integrate Red Hat build of Keycloak to any Java technology that provides built-in support for OAuth2 or OpenID Connect. The Red Hat build of Keycloak Policy Enforcer provides built-in support for the following types of applications: Servlet Application Using Fine-grained Authorization Spring Boot REST Service Protected Using Red Hat build of Keycloak Authorization Services For integration of the Red Hat build of Keycloak Policy Enforcer with different types of applications, consider the following examples: Servlet Application Using Fine-grained Authorization Spring Boot REST Service Protected Using Keycloak Authorization Services If migrating from the Red Hat Single Sign-On 7.6 Java Adapter you are using is not an option, you still have access to the adapter from Red Hat Single Sign-On 7.6, which is now in maintenance support. Red Hat Single Sign-On 7.6 adapters are supported to be used in combination with the Red Hat build of Keycloak 22.0 server. Additional resources Policy enforcers 5.6. Migrating Single Page Applications (SPA) using the Red Hat build of Keycloak JS Adapter To migrate applications secured with the Red Hat Single Sign-On 7.6 adapter, upgrade to Red Hat build of Keycloak 22.0, which provides a more recent version of the adapter. Depending on how it is used, there are some minor changes needed, which are described below. 5.6.1. Legacy Promise API removed With this release, the legacy Promise API methods from the Red Hat build of Keycloak JS adapter is removed. This means that calling .success() and .error() on promises returned from the adapter is no longer possible. 5.6.2. Required to be instantiated with the new operator In a release, deprecation warnings were logged when the Red Hat build of Keycloak JS adapter is constructed without the new operator. Starting with this release, doing so will throw an exception instead. This change is to align with the expected behavior of JavaScript classes , which will allow further refactoring of the adapter in the future. To migrate applications secured with the Red Hat Single Sign-On 7.6 adapter, upgrade to Red Hat build of Keycloak 22.0, which provides a more recent version of the adapter. 5.7. Migrating SAML applications 5.7.1. Migrating Red Hat JBoss Enterprise Application Platform applications 5.7.1.1. 
Red Hat JBoss Enterprise Application Platform 8.x Red Hat build of Keycloak 22.0 includes client adapters for Red Hat JBoss Enterprise Application Platform 8.x, including support for Jakarta EE. 5.7.1.2. Red Hat JBoss Enterprise Application Platform 7.x As Red Hat JBoss Enterprise Application Platform 7.x is close to ending full support, Red Hat build of Keycloak will not provide support for it. For existing applications deployed to Red Hat JBoss Enterprise Application Platform 7.x, adapters with maintenance support are available through Red Hat Single Sign-On 7.6. Red Hat Single Sign-On 7.6 adapters are supported for use in combination with the Red Hat build of Keycloak 22.0 server. 5.7.1.3. Red Hat JBoss Enterprise Application Platform 6.x As Red Hat JBoss Enterprise Application Platform (JBoss EAP) 6.x has reached end of maintenance support, going forward neither Red Hat Single Sign-On 7.6 nor Red Hat build of Keycloak will provide support for it. 5.7.2. Key changes in SAML protocol and client settings 5.7.2.1. SAML SP metadata changes Prior to this release, SAML SP metadata contained the same key for both signing and encryption use. Starting with this version of Keycloak, we include only encryption-intended realm keys for encryption use in SP metadata. For each encryption key descriptor, we also specify the algorithm that it is supposed to be used with. The following table shows the supported XML-Enc algorithms with the mapping to Red Hat build of Keycloak realm keys. XML-Enc algorithm Realm key algorithm rsa-oaep-mgf1p RSA-OAEP rsa-1_5 RSA1_5 Additional resources Keycloak Upgrading Guide 5.7.2.2. Deprecated RSA_SHA1 and DSA_SHA1 algorithms for SAML Algorithms RSA_SHA1 and DSA_SHA1 , which can be configured as Signature algorithms on SAML adapters, clients, and identity providers, are deprecated. We recommend using safer alternatives based on SHA256 or SHA512 . Also, verifying signatures on signed SAML documents or assertions with these algorithms does not work on Java 17 or higher. If you use these algorithms and the other party consuming your SAML documents is running on Java 17 or higher, verifying signatures will not work. The possible workaround is to remove algorithms such as the following: http://www.w3.org/2000/09/xmldsig#rsa-sha1 or http://www.w3.org/2000/09/xmldsig#dsa-sha1 from the list of disallowed algorithms configured in the property jdk.xml.dsig.secureValidationPolicy in the file $JAVA_HOME/conf/security/java.security.
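As a rough sketch of the workaround described above, you can first check whether your JDK's default policy disallows these algorithms, and then adjust the property value. The exact entries and their formatting vary between JDK builds, so treat the following as an assumption to verify against your installation:
# Check whether the SHA-1 XML signature algorithms are disallowed by the default policy
grep -nE 'xmldsig#(rsa|dsa)-sha1' "$JAVA_HOME/conf/security/java.security"
If matching disallowAlg entries are present in the jdk.xml.dsig.secureValidationPolicy value, remove those entries (or override the property in a separate properties file passed with -Djava.security.properties ) and restart the JVM that verifies the SAML signatures.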
[ "401 Unauthorized WWW-Authenticate: Bearer realm=\"myrealm\"", "400 Bad Request WWW-Authenticate: Bearer realm=\"myrealm\", error=\"invalid_request\", error_description=\"...\"", "403 Forbidden WWW-Authenticate: Bearer realm=\"myrealm\", error=\"insufficient_scope\", error_description=\"Missing openid scope\"", "500 Internal Server Error", "401 Unauthorized WWW-Authenticate: Bearer realm=\"myrealm\", error=\"invalid_token\", error_description=\"...\"", "{ \"realm\": \"quickstart\", \"auth-server-url\": \"http://localhost:8180\", \"ssl-required\": \"external\", \"resource\": \"jakarta-servlet-authz-client\", \"credentials\": { \"secret\": \"secret\" } }", "<dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-policy-enforcer</artifactId> <version>USD{Red Hat build of Keycloak .version}</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/migration_guide/migrating-applications
2.4. Cross site failover for Hot Rod client
2.4. Cross site failover for Hot Rod client In JDG 6.6, if cross datacenter replication is configured for JDG Server, then a Java Hot Rod client application can be configured to fail over to the backup JDG cluster if the primary cluster becomes unavailable. Switching between sites can also be done programmatically. This enhancement is applicable to Client-Server mode.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.0_release_notes/cross_site_failover_for_hot_rod_client
Chapter 11. Using service accounts in applications
Chapter 11. Using service accounts in applications 11.1. Service accounts overview A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. For example, service accounts can allow: Replication controllers to make API calls to create or delete pods. Applications inside containers to make API calls for discovery purposes. External applications to make API calls for monitoring or integration purposes. Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. Each service account automatically contains two secrets: An API token Credentials for the OpenShift Container Registry The generated API token and registry credentials do not expire, but you can revoke them by deleting the secret. When you delete the secret, a new one is automatically generated to take its place. 11.2. Default service accounts Your OpenShift Container Platform cluster contains default service accounts for cluster management and generates more service accounts for each project. 11.2.1. Default cluster service accounts Several infrastructure controllers run using service account credentials. The following service accounts are created in the OpenShift Container Platform infrastructure project ( openshift-infra ) at server start, and given the following roles cluster-wide: Service account Description replication-controller Assigned the system:replication-controller role deployment-controller Assigned the system:deployment-controller role build-controller Assigned the system:build-controller role. Additionally, the build-controller service account is included in the privileged security context constraint to create privileged build pods. 11.2.2. Default project service accounts and roles Three service accounts are automatically created in each project: Service account Usage builder Used by build pods. It is given the system:image-builder role, which allows pushing images to any imagestream in the project using the internal Docker registry. deployer Used by deployment pods and given the system:deployer role, which allows viewing and modifying replication controllers and pods in the project. Note The deployer service account is not created if the DeploymentConfig cluster capability is not enabled. default Used to run all other pods unless they specify a different service account. All service accounts in a project are given the system:image-puller role, which allows pulling images from any image stream in the project using the internal container image registry. 11.2.3. Automatically generated secrets By default, OpenShift Container Platform creates the following secrets for each service account: A dockercfg image pull secret A service account token secret Note Prior to OpenShift Container Platform 4.11, a second service account token secret was generated when a service account was created. 
This service account token secret was used to access the Kubernetes API. Starting with OpenShift Container Platform 4.11, this second service account token secret is no longer created. This is because the LegacyServiceAccountTokenNoAutoGeneration upstream Kubernetes feature gate was enabled, which stops the automatic generation of secret-based service account tokens to access the Kubernetes API. After upgrading to 4.15, any existing service account token secrets are not deleted and continue to function. This service account token secret and docker configuration image pull secret are necessary to integrate the OpenShift image registry into the cluster's user authentication and authorization system. However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, these secrets are not generated for each service account. Warning Do not rely on these automatically generated secrets for your own use; they might be removed in a future OpenShift Container Platform release. Workloads are automatically injected with a projected volume to obtain a bound service account token. If your workload needs an additional service account token, add an additional projected volume in your workload manifest. Bound service account tokens are more secure than service account token secrets for the following reasons: Bound service account tokens have a bounded lifetime. Bound service account tokens contain audiences. Bound service account tokens can be bound to pods or secrets and the bound tokens are invalidated when the bound object is removed. For more information, see Configuring bound service account tokens using volume projection . You can also manually create a service account token secret to obtain a token, if the security exposure of a non-expiring token in a readable API object is acceptable to you. For more information, see Creating a service account token secret . Additional resources For information about requesting bound service account tokens, see Configuring bound service account tokens using volume projection . For information about creating a service account token secret, see Creating a service account token secret . 11.3. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: USD oc get sa Example output NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d To create a new service account in the current project: USD oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: USD oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none>
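The new service account has no project permissions beyond the defaults until you bind it to a role. As a minimal sketch, the following command grants the example robot service account the view role in the project1 namespace; the service account name, role, and namespace are placeholders for your own values:
# Bind the view role to the robot service account in project1
oc policy add-role-to-user view -z robot -n project1
You can verify the result by inspecting the role bindings in the project, for example with oc get rolebindings -n project1 .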
[ "system:serviceaccount:<project>:<name>", "oc get sa", "NAME SECRETS AGE builder 2 2d default 2 2d deployer 2 2d", "oc create sa <service_account_name> 1", "serviceaccount \"robot\" created", "apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>", "oc describe sa robot", "Name: robot Namespace: project1 Labels: <none> Annotations: <none> Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: robot-token-f4khf Events: <none>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/authentication_and_authorization/using-service-accounts
Chapter 1. Custom triggers
Chapter 1. Custom triggers The Cryostat 3.0 agent supports custom triggers that are based on MBean metric values. You can configure the Cryostat agent to start a JFR recording dynamically when a custom trigger condition is met. A custom trigger condition is based on MBean counters that can cover a range of runtime, memory, thread, and operating system metrics. You can include one or more MBean counter types as part of the custom trigger condition for a JFR recording. You can also specify a duration or time period as part of the trigger condition, which means that the conditional values must persist for the specified duration before the condition is met. The Cryostat agent supports smart triggers that continually listen to the values of the specified MBean counters. Triggering occurs if the current values of the specified counters match the configured values in the custom trigger for the specified duration. If triggering occurs, the Cryostat agent dynamically starts the JFR recording at that point. Note A JFR recording will not start dynamically if the custom trigger condition associated with this recording is not met.
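Smart triggers are typically supplied to the agent as a JVM system property when the application starts. The following line is a hypothetical illustration only: the property name ( cryostat.agent.smart-trigger.definitions ), the expression syntax, the agent path, and the profile event template name are assumptions that you should verify against the Cryostat 3.0 agent documentation for your version.
# Hypothetical sketch: start a recording from the "profile" template once process
# CPU load stays above 20% for 30 seconds
java -javaagent:/path/to/cryostat-agent.jar \
  -Dcryostat.agent.smart-trigger.definitions='[ProcessCpuLoad>0.2&&TargetDuration>duration("30s")]~profile' \
  -jar my-application.jar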
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/enabling_dynamic_jfr_recordings_based_on_mbean_custom_triggers/con_custom-triggers_cryostat
Chapter 3. Reports by CVEs
Chapter 3. Reports by CVEs You can create PDF reports showing a filtered list of CVEs your systems are exposed to. Give each report a relevant name, apply filters, and add user notes to present focused data to specific stakeholders. You can apply the following filters when setting up the PDF report: Security rules. Show only CVEs with the security rules label. Known exploit. Show only CVEs with the Known exploit label. Severity. Select one or more values: Critical, Important, Moderate, Low, or Unknown. CVSS base score. Select one or more ranges: All, 0.0-3.9, 4.0-7.9, 8.0-10.0, N/A (not applicable) Business risk. Select one or more values: High, Medium, Low, Not defined. Status. Select one or more values: Not reviewed, In review, On-hold, Scheduled for patch, Resolved, No action - risk accepted, Resolved via mitigation. Publish date. Select from All, Last 7 days, Last 30 days, Last 90 days, Last year, or More than 1 year ago. Applies to OS. Select the RHEL minor version(s) of systems to filter and view. Tags. Select groups of tagged systems. For more information about tags and system groups, see System tags and groups Advisory. Select whether to display only CVEs that have associated advisories (errata), only CVEs without advisories, or all CVEs. The CVE report lists the CVEs, linking each to the respective CVE page in the Red Hat CVE database so you can learn more about it. The list is ordered primarily by the publish date of the CVE, with the most recently published CVEs at the top of the list. Example of an Insights Vulnerability CVE report 3.1. Creating a PDF report of CVEs Use the following procedure to create a point-in-time snapshot of CVEs potentially affecting your systems. Prerequisites You must be logged in to Red Hat Hybrid Cloud Console . Procedure Navigate to the Security > Vulnerability > Reports page in the Insights for Red Hat Enterprise Linux application. On the Report by CVEs card, click Create report . Make selections as needed in the pop-up card: Optionally, customize the report title. Under Filter CVEs by , click each filter dropdown and select a value. Select Tags to only include systems in a tagged group of systems. Under CVE data to include, Choose columns is activated by default, allowing you to deselect columns you do not want to include. Leave all boxes checked, or click All columns to show everything. Optionally add notes to give the report context for the intended audience. Click Export report and allow the application a minute to generate the report. Select to open or save the PDF file, if your OS asks, and click OK .
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_vulnerability_service_reports/vuln-reports-cves
Chapter 88. Plugin schema reference
Chapter 88. Plugin schema reference Used in: Build Property Description name The unique name of the connector plugin. Will be used to generate the path where the connector artifacts will be stored. The name has to be unique within the KafkaConnect resource. The name must follow the pattern: ^[a-z][-_a-z0-9]*[a-z]$ . Required. string artifacts List of artifacts which belong to this connector plugin. Required. JarArtifact , TgzArtifact , ZipArtifact , MavenArtifact , OtherArtifact array
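To make the schema concrete, the following is a minimal sketch of a KafkaConnect resource that declares one plugin with a single tgz artifact, applied with the oc CLI. The resource name, bootstrap address, output image, registry secret, artifact URL, and checksum are all placeholder values for illustration; each entry in artifacts corresponds to one of the artifact types listed above, selected by its type field.
cat <<'EOF' | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9093
  build:
    output:
      type: docker
      image: image-registry.example.com/my-org/my-connect-build:latest
      pushSecret: my-registry-credentials
    plugins:
      - name: my-source-connector   # must match ^[a-z][-_a-z0-9]*[a-z]$ and be unique in this resource
        artifacts:
          - type: tgz
            url: https://example.com/connectors/my-source-connector.tar.gz
            sha512sum: <checksum-of-the-archive>
EOF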
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-Plugin-reference
Chapter 5. Securing Apicurio Registry deployments
Chapter 5. Securing Apicurio Registry deployments Apicurio Registry provides authentication and authorization by using Red Hat Single Sign-On based on OpenID Connect (OIDC) and HTTP basic. You can configure the required settings automatically using the Red Hat Single Sign-On Operator, or manually configure them in Red Hat Single Sign-On and Apicurio Registry. Apicurio Registry also provides authentication and authorization by using Microsoft Azure Active Directory based on OpenID Connect (OIDC) and the OAuth Authorization Code Flow. You can configure the required settings manually in Azure AD and Apicurio Registry. In addition to role-based authorization options with Red Hat Single Sign-On or Azure AD, Apicurio Registry also provides content-based authorization at the schema or API level, where only the artifact creator has write access. You can also configure an HTTPS connection to Apicurio Registry from inside or outside an OpenShift cluster. This chapter explains how to configure the following security options for your Apicurio Registry deployment on OpenShift: Section 5.1, "Securing Apicurio Registry using the Red Hat Single Sign-On Operator" Section 5.2, "Configuring Apicurio Registry authentication and authorization with Red Hat Single Sign-On" Section 5.3, "Configuring Apicurio Registry authentication and authorization with Microsoft Azure Active Directory" Section 5.4, "Apicurio Registry authentication and authorization configuration options" Section 5.5, "Configuring an HTTPS connection to Apicurio Registry from inside the OpenShift cluster" Section 5.6, "Configuring an HTTPS connection to Apicurio Registry from outside the OpenShift cluster" Additional resources For details on security configuration for Java client applications, see the following: Apicurio Registry Java client configuration Apicurio Registry serializer/deserializer configuration 5.1. Securing Apicurio Registry using the Red Hat Single Sign-On Operator The following procedure shows how to configure an Apicurio Registry REST API and web console to be protected by Red Hat Single Sign-On. Apicurio Registry supports the following user roles: Table 5.1. Apicurio Registry user roles Name Capabilities sr-admin Full access, no restrictions. sr-developer Create artifacts and configure artifact rules. Cannot modify global rules, perform import/export, or use the /admin REST API endpoint. sr-readonly View and search only. Cannot modify artifacts or rules, perform import/export, or use the /admin REST API endpoint. Note There is a related configuration option in the ApicurioRegistry CRD that you can use to set the web console to read-only mode. However, this configuration does not affect the REST API. Prerequisites You must have already installed the Apicurio Registry Operator. You must install the Red Hat Single Sign-On Operator or have Red Hat Single Sign-On accessible from your OpenShift cluster. Important The example configuration in this procedure is intended for development and testing only. To keep the procedure simple, it does not use HTTPS and other defenses recommended for a production environment. For more details, see the Red Hat Single Sign-On documentation. Procedure In the OpenShift web console, click Installed Operators and Red Hat Single Sign-On Operator , and then the Keycloak tab. Click Create Keycloak to provision a new Red Hat Single Sign-On instance for securing an Apicurio Registry deployment. 
You can use the default value, for example: apiVersion: keycloak.org/v1alpha1 kind: Keycloak metadata: name: example-keycloak labels: app: sso spec: instances: 1 externalAccess: enabled: True podDisruptionBudget: enabled: True Wait until the instance has been created, and click Networking and then Routes to access the new route for the keycloak instance. Click the Location URL and copy the displayed URL value for later use when deploying Apicurio Registry. Click Installed Operators and Red Hat Single Sign-On Operator , and click the Keycloak Realm tab, and then Create Keycloak Realm to create a registry example realm: apiVersion: keycloak.org/v1alpha1 kind: KeycloakRealm metadata: name: registry-keycloakrealm labels: app: sso spec: instanceSelector: matchLabels: app: sso realm: displayName: Registry enabled: true id: registry realm: registry sslRequired: none roles: realm: - name: sr-admin - name: sr-developer - name: sr-readonly clients: - clientId: registry-client-ui implicitFlowEnabled: true redirectUris: - '*' standardFlowEnabled: true webOrigins: - '*' publicClient: true - clientId: registry-client-api implicitFlowEnabled: true redirectUris: - '*' standardFlowEnabled: true webOrigins: - '*' publicClient: true users: - credentials: - temporary: false type: password value: changeme enabled: true realmRoles: - sr-admin username: registry-admin - credentials: - temporary: false type: password value: changeme enabled: true realmRoles: - sr-developer username: registry-developer - credentials: - temporary: false type: password value: changeme enabled: true realmRoles: - sr-readonly username: registry-user Important You must customize this KeycloakRealm resource with values suitable for your environment if you are deploying to production. You can also create and manage realms using the Red Hat Single Sign-On web console. If your cluster does not have a valid HTTPS certificate configured, you can create the following HTTP Service and Ingress resources as a temporary workaround: Click Networking and then Services , and click Create Service using the following example: apiVersion: v1 kind: Service metadata: name: keycloak-http labels: app: keycloak spec: ports: - name: keycloak-http protocol: TCP port: 8080 targetPort: 8080 selector: app: keycloak component: keycloak type: ClusterIP sessionAffinity: None status: loadBalancer: {} Click Networking and then Ingresses , and click Create Ingress using the following example: Modify the host value to create a route accessible for the Apicurio Registry user, and use it instead of the HTTPS route created by the Red Hat Single Sign-On Operator. Click the Apicurio Registry Operator , and on the ApicurioRegistry tab, click Create ApicurioRegistry , using the following example, but replace your values in the keycloak section. apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry metadata: name: example-apicurioregistry-kafkasql-keycloak spec: configuration: security: keycloak: url: "http://keycloak-http-<namespace>.apps.<cluster host>" # ^ Required # Use an HTTP URL in development. realm: "registry" # apiClientId: "registry-client-api" # ^ Optional (default value) # uiClientId: "registry-client-ui" # ^ Optional (default value) persistence: 'kafkasql' kafkasql: bootstrapServers: '<my-cluster>-kafka-bootstrap.<my-namespace>.svc:9092' 5.2. 
Configuring Apicurio Registry authentication and authorization with Red Hat Single Sign-On This section explains how to manually configure authentication and authorization options for Apicurio Registry and Red Hat Single Sign-On. Note Alternatively, for details on how to configure these settings automatically, see Section 5.1, "Securing Apicurio Registry using the Red Hat Single Sign-On Operator" . The Apicurio Registry web console and core REST API support authentication in Red Hat Single Sign-On based on OAuth and OpenID Connect (OIDC). The same Red Hat Single Sign-On realm and users are federated across the Apicurio Registry web console and core REST API using OpenID Connect so that you only require one set of credentials. Apicurio Registry provides role-based authorization for default admin, write, and read-only user roles. Apicurio Registry provides content-based authorization at the schema or API level, where only the creator of the registry artifact can update or delete it. Apicurio Registry authentication and authorization settings are disabled by default. Prerequisites Red Hat Single Sign-On is installed and running. For more details, see the Red Hat Single Sign-On user documentation . Apicurio Registry is installed and running. Procedure In the Red Hat Single Sign-On Admin Console, create a Red Hat Single Sign-On realm for Apicurio Registry. By default, Apicurio Registry expects a realm name of registry . For details on creating realms, see the Red Hat Single Sign-On user documentation . Create a Red Hat Single Sign-On client for the Apicurio Registry API. By default, Apicurio Registry expects the following settings: Client ID : registry-api Client Protocol : openid-connect Access Type : bearer-only You can use the defaults for the other client settings. Note If you are using Red Hat Single Sign-On service accounts, the client Access Type must be confidential instead of bearer-only . Create a Red Hat Single Sign-On client for the Apicurio Registry web console. By default, Apicurio Registry expects the following settings: Client ID : apicurio-registry Client Protocol : openid-connect Access Type : public Valid Redirect URLs : http://my-registry-url:8080/* Web Origins : + You can use the defaults for the other client settings. In your Apicurio Registry deployment on OpenShift, set the following Apicurio Registry environment variables to configure authentication using Red Hat Single Sign-On: Table 5.2. Configuration for Apicurio Registry authentication with Red Hat Single Sign-On Environment variable Description Type Default AUTH_ENABLED Enables authentication for Apicurio Registry. When set to true , the environment variables that follow are required for authentication using Red Hat Single Sign-On. String false KEYCLOAK_URL The URL of the Red Hat Single Sign-On authentication server. For example, http://localhost:8080 . String - KEYCLOAK_REALM The Red Hat Single Sign-On realm for authentication. For example, registry. String - KEYCLOAK_API_CLIENT_ID The client ID for the Apicurio Registry REST API. String registry-api KEYCLOAK_UI_CLIENT_ID The client ID for the Apicurio Registry web console. String apicurio-registry Tip For an example of setting environment variables on OpenShift, see Section 6.1, "Configuring Apicurio Registry health checks on OpenShift" . Set the following option to true to enable Apicurio Registry user roles in Red Hat Single Sign-On: Table 5.3. 
Configuration for Apicurio Registry role-based authorization Environment variable Java system property Type Default value ROLE_BASED_AUTHZ_ENABLED registry.auth.role-based-authorization Boolean false When Apicurio Registry user roles are enabled, you must assign Apicurio Registry users to at least one of the following default user roles in your Red Hat Single Sign-On realm: Table 5.4. Default user roles for registry authentication and authorization Role Read artifacts Write artifacts Global rules Summary sr-admin Yes Yes Yes Full access to all create, read, update, and delete operations. sr-developer Yes Yes No Access to create, read, update, and delete operations, except configuring global rules. This role can configure artifact-specific rules. sr-readonly Yes No No Access to read and search operations only. This role cannot configure any rules. Set the following to true to enable owner-only authorization for updates to schema and API artifacts in Apicurio Registry: Table 5.5. Configuration for owner-only authorization Environment variable Java system property Type Default value REGISTRY_AUTH_OBAC_ENABLED registry.auth.owner-only-authorization Boolean false Additional resources For details on configuring non-default user role names, see Section 5.4, "Apicurio Registry authentication and authorization configuration options" . For an open source example application and Keycloak realm, see Docker Compose example of Apicurio Registry with Keycloak . For details on how to use Red Hat Single Sign-On in a production environment, see the Red Hat Single Sign-On documentation . 5.3. Configuring Apicurio Registry authentication and authorization with Microsoft Azure Active Directory This section explains how to manually configure authentication and authorization options for Apicurio Registry and Microsoft Azure Active Directory (Azure AD). The Apicurio Registry web console and core REST API support authentication in Azure AD based on OpenID Connect (OIDC) and the OAuth Authorization Code Flow. Apicurio Registry provides role-based authorization for default admin, write, and read-only user roles. Apicurio Registry authentication and authorization settings are disabled by default. To secure Apicurio Registry with Azure AD, you require a valid directory in Azure AD with specific configuration. This involves registering the Apicurio Registry application in the Azure AD portal with recommended settings and configuring environment variables in Apicurio Registry. Prerequisites Azure AD is installed and running. For more details, see the Microsoft Azure AD user documentation . Apicurio Registry is installed and running. Procedure Log in to the Azure AD portal using your email address or GitHub account. In the navigation menu, select Manage > App registrations > New registration , and complete the following settings: Name : Enter your application name. For example: apicurio-registry-example Supported account types : Click Accounts in any organizational directory . Redirect URI : Select application from the list, and enter your Apicurio Registry web console application host. For example: https://test-registry.com/ui/ Important You must register your Apicurio Registry application host as a Redirect URI . When logging in, users are redirected from Apicurio Registry to Azure AD for authentication, and you want to send them back to your application afterwards. Azure AD does not allow any redirect URLs that are not registered. Click Register . 
You can view your app registration details by selecting Manage > App registrations > apicurio-registry-example . Select Manage > Authentication and ensure that the application is configured with your redirect URLs and tokens as follows: Redirect URIs : For example: https://test-registry.com/ui/ Implicit grant and hybrid flows : Click ID tokens (used for implicit and hybrid flows) Select Azure AD > Admin > App registrations > Your app > Application (client) ID . For example: 123456a7-b8c9-012d-e3f4-5fg67h8i901 Select Azure AD > Admin > App registrations > Your app > Directory (tenant) ID . For example: https://login.microsoftonline.com/1a2bc34d-567e-89f1-g0hi-1j2kl3m4no56/v2.0 In Apicurio Registry, configure the following environment variables with your Azure AD settings: Table 5.6. Configuration for Azure AD settings in Apicurio Registry Environment variable Description Setting KEYCLOAK_API_CLIENT_ID The client application ID for the Apicurio Registry REST API Your Azure AD Application (client) ID obtained in step 5. For example: 123456a7-b8c9-012d-e3f4-5fg67h8i901 REGISTRY_OIDC_UI_CLIENT_ID The client application ID for the Apicurio Registry web console. Your Azure AD Application (client) ID obtained in step 5. For example: 123456a7-b8c9-012d-e3f4-5fg67h8i901 REGISTRY_AUTH_URL_CONFIGURED The URL for authentication in Azure AD. Your Azure AD Application (tenant) ID obtained in step 6. For example: https://login.microsoftonline.com/1a2bc34d-567e-89f1-g0hi-1j2kl3m4no56/v2.0 . In Apicurio Registry, configure the following environment variables for Apicurio Registry-specific settings: Table 5.7. Configuration for Apicurio Registry-specific settings Environment variable Description Setting REGISTRY_AUTH_ENABLED Enables authentication for Apicurio Registry. true REGISTRY_UI_AUTH_TYPE The Apicurio Registry authentication type. oidc CORS_ALLOWED_ORIGINS The host for your Apicurio Registry deployment for cross-origin resource sharing (CORS). For example: https://test-registry.com REGISTRY_OIDC_UI_REDIRECT_URL The host for your Apicurio Registry web console. For example: https://test-registry.com/ui ROLE_BASED_AUTHZ_ENABLED Enables role-based authorization in Apicurio Registry. true QUARKUS_OIDC_ROLES_ROLE_CLAIM_PATH The name of the claim in which Azure AD stores roles. roles Note When you enable roles in Apicurio Registry, you must also create the same roles in Azure AD as application roles. The default roles expected by Apicurio Registry are sr-admin , sr-developer , and sr-readonly . Additional resources For details on configuring non-default user role names, see Section 5.4, "Apicurio Registry authentication and authorization configuration options" . For more details on using Azure AD, see the Microsoft Azure AD user documentation . 5.4. Apicurio Registry authentication and authorization configuration options Apicurio Registry provides authentication options for OpenID Connect with Red Hat Single Sign-On and HTTP basic authentication. Apicurio Registry provides authorization options for role-based and content-based approaches: Role-based authorization for default admin, write, and read-only user roles. Content-based authorization for schema or API artifacts, where only the owner of the artifacts or artifact group can update or delete artifacts. Important All authentication and authorization options in Apicurio Registry are disabled by default. Before enabling any of these options, you must first set the AUTH_ENABLED option to true . 
This chapter provides details on the following configuration options: Apicurio Registry authentication by using OpenID Connect with Red Hat Single Sign-On Apicurio Registry authentication by using HTTP basic Apicurio Registry role-based authorization Apicurio Registry owner-only authorization Apicurio Registry authenticated read access Apicurio Registry anonymous read-only access Apicurio Registry authentication by using OpenID Connect with Red Hat Single Sign-On You can set the following environment variables to configure authentication for the Apicurio Registry web console and API with Red Hat Single Sign-On: Table 5.8. Configuration for Apicurio Registry authentication with Red Hat Single Sign-On Environment variable Description Type Default AUTH_ENABLED Enables authentication for Apicurio Registry. When set to true , the environment variables that follow are required for authentication using Red Hat Single Sign-On. String false KEYCLOAK_URL The URL of the Red Hat Single Sign-On authentication server. For example, http://localhost:8080 . String - KEYCLOAK_REALM The Red Hat Single Sign-On realm for authentication. For example, registry. String - KEYCLOAK_API_CLIENT_ID The client ID for the Apicurio Registry REST API. String registry-api KEYCLOAK_UI_CLIENT_ID The client ID for the Apicurio Registry web console. String apicurio-registry Apicurio Registry authentication by using HTTP basic By default, Apicurio Registry supports authentication by using OpenID Connect. Users or API clients must obtain an access token to make authenticated calls to the Apicurio Registry REST API. However, because some tools do not support OpenID Connect, you can also configure Apicurio Registry to support HTTP basic authentication by setting the following configuration options to true : Table 5.9. Configuration for Apicurio Registry HTTP basic authentication Environment variable Java system property Type Default value AUTH_ENABLED registry.auth.enabled Boolean false CLIENT_CREDENTIALS_BASIC_AUTH_ENABLED registry.auth.basic-auth-client-credentials.enabled Boolean false Apicurio Registry HTTP basic client credentials cache expiry You can also configure the HTTP basic client credentials cache expiry time. By default, when using HTTP basic authentication, Apicurio Registry caches JWT tokens, and does not issue a new token when there is no need. You can configure the cache expiry time for JWT tokens, which is set to 10 mins by default. When using Red Hat Single Sign-On, it is best to set this configuration to your Red Hat Single Sign-On JWT expiry time minus one minute. For example, if you have the expiry time set to 5 mins in Red Hat Single Sign-On, you should set the following configuration option to 4 mins: Table 5.10. Configuration for HTTP basic client credentials cache expiry Environment variable Java system property Type Default value CLIENT_CREDENTIALS_BASIC_CACHE_EXPIRATION registry.auth.basic-auth-client-credentials.cache-expiration Integer 10 Apicurio Registry role-based authorization You can set the following options to true to enable role-based authorization in Apicurio Registry: Table 5.11. 
Configuration for Apicurio Registry role-based authorization Environment variable Java system property Type Default value AUTH_ENABLED registry.auth.enabled Boolean false ROLE_BASED_AUTHZ_ENABLED registry.auth.role-based-authorization Boolean false You can then configure role-based authorization to use roles included in the user's authentication token (for example, granted when authenticating by using Red Hat Single Sign-On), or to use role mappings managed internally by Apicurio Registry. Use roles assigned in Red Hat Single Sign-On To enable using roles assigned by Red Hat Single Sign-On, set the following environment variables: Table 5.12. Configuration for Apicurio Registry role-based authorization by using Red Hat Single Sign-On Environment variable Description Type Default ROLE_BASED_AUTHZ_SOURCE When set to token , user roles are taken from the authentication token. String token REGISTRY_AUTH_ROLES_ADMIN The name of the role that indicates a user is an admin. String sr-admin REGISTRY_AUTH_ROLES_DEVELOPER The name of the role that indicates a user is a developer. String sr-developer REGISTRY_AUTH_ROLES_READONLY The name of the role that indicates a user has read-only access. String sr-readonly When Apicurio Registry is configured to use roles from Red Hat Single Sign-On, you must assign Apicurio Registry users to at least one of the following user roles in Red Hat Single Sign-On. However, you can configure different user role names by using the environment variables in Table 5.12, "Configuration for Apicurio Registry role-based authorization by using Red Hat Single Sign-On" . Table 5.13. Apicurio Registry roles for authentication and authorization Role name Read artifacts Write artifacts Global rules Description sr-admin Yes Yes Yes Full access to all create, read, update, and delete operations. sr-developer Yes Yes No Access to create, read, update, and delete operations, except configuring global rules and import/export. This role can configure artifact-specific rules only. sr-readonly Yes No No Access to read and search operations only. This role cannot configure any rules. Manage roles directly in Apicurio Registry To enable using roles managed internally by Apicurio Registry, set the following environment variable: Table 5.14. Configuration for Apicurio Registry role-based authorization by using internal role mappings Environment variable Description Type Default ROLE_BASED_AUTHZ_SOURCE When set to application , user roles are managed internally by Apicurio Registry. String token When using internally managed role mappings, users can be assigned a role by using the /admin/roleMappings endpoint in the Apicurio Registry REST API. For more details, see Apicurio Registry REST API documentation . Users can be granted exactly one role: ADMIN , DEVELOPER , or READ_ONLY . Only users with admin privileges can grant access to other users. Apicurio Registry admin-override configuration Because there are no default admin users in Apicurio Registry, it is usually helpful to configure another way for users to be identified as admins. You can configure this admin-override feature by using the following environment variables: Table 5.15. Configuration for Apicurio Registry admin-override Environment variable Description Type Default REGISTRY_AUTH_ADMIN_OVERRIDE_ENABLED Enables the admin-override feature. String false REGISTRY_AUTH_ADMIN_OVERRIDE_FROM Where to look for admin-override information. Only token is currently supported. 
String token REGISTRY_AUTH_ADMIN_OVERRIDE_TYPE The type of information used to determine if a user is an admin. Values depend on the value of the FROM variable, for example, role or claim when FROM is token . String role REGISTRY_AUTH_ADMIN_OVERRIDE_ROLE The name of the role that indicates a user is an admin. String sr-admin REGISTRY_AUTH_ADMIN_OVERRIDE_CLAIM The name of a JWT token claim to use for determining admin-override. String org-admin REGISTRY_AUTH_ADMIN_OVERRIDE_CLAIM_VALUE The value that the JWT token claim indicated by the CLAIM variable must be for the user to be granted admin-override. String true For example, you can use this admin-override feature to assign the sr-admin role to a single user in Red Hat Single Sign-On, which grants that user the admin role. That user can then use the /admin/roleMappings REST API (or associated UI) to grant roles to additional users (including additional admins). Apicurio Registry owner-only authorization You can set the following options to true to enable owner-only authorization for updates to artifacts or artifact groups in Apicurio Registry: Table 5.16. Configuration for owner-only authorization Environment variable Java system property Type Default value AUTH_ENABLED registry.auth.enabled Boolean false REGISTRY_AUTH_OBAC_ENABLED registry.auth.owner-only-authorization Boolean false REGISTRY_AUTH_OBAC_LIMIT_GROUP_ACCESS registry.auth.owner-only-authorization.limit-group-access Boolean false When owner-only authorization is enabled, only the user who created an artifact can modify or delete that artifact. When owner-only authorization and group owner-only authorization are both enabled, only the user who created an artifact group has write access to that artifact group, for example, to add or remove artifacts in that group. Apicurio Registry authenticated read access When the authenticated read access option is enabled, Apicurio Registry grants at least read-only access to requests from any authenticated user in the same organization, regardless of their user role. To enable authenticated read access, you must first enable role-based authorization, and then ensure that the following options are set to true : Table 5.17. Configuration for authenticated read access Environment variable Java system property Type Default value AUTH_ENABLED registry.auth.enabled Boolean false REGISTRY_AUTH_AUTHENTICATED_READS_ENABLED registry.auth.authenticated-read-access.enabled Boolean false For more details, see the section called "Apicurio Registry role-based authorization" . Apicurio Registry anonymous read-only access In addition to the two main types of authorization (role-based and owner-based authorization), Apicurio Registry supports an anonymous read-only access option. To allow anonymous users, such as REST API calls with no authentication credentials, to make read-only calls to the REST API, set the following options to true : Table 5.18. Configuration for anonymous read-only access Environment variable Java system property Type Default value AUTH_ENABLED registry.auth.enabled Boolean false REGISTRY_AUTH_ANONYMOUS_READ_ACCESS_ENABLED registry.auth.anonymous-read-access.enabled Boolean false Additional resources For an example of how to set environment variables in your Apicurio Registry deployment on OpenShift, see Section 6.3, "Managing Apicurio Registry environment variables" For details on configuring custom authentication for Apicurio Registry, see the Quarkus OpenID Connect documentation 5.5. 
Configuring an HTTPS connection to Apicurio Registry from inside the OpenShift cluster The following procedure shows how to configure your Apicurio Registry deployment to expose a port for HTTPS connections from inside the OpenShift cluster. Warning This kind of connection is not directly available outside of the cluster. Routing is based on hostname, which is encoded in the case of an HTTPS connection. Therefore, edge termination or other configuration is still needed. See Section 5.6, "Configuring an HTTPS connection to Apicurio Registry from outside the OpenShift cluster" . Prerequisites You must have already installed the Apicurio Registry Operator. Procedure Generate a private key and a self-signed certificate. You can skip this step if you are using your own certificates. openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout tls.key -out tls.crt Create a new secret to hold the certificate and the private key. In the left navigation menu of the OpenShift web console, click Workloads > Secrets > Create Key/Value Secret . Use the following values: Name: https-cert-secret Key 1: tls.key Value 1: tls.key (uploaded file) Key 2: tls.crt Value 2: tls.crt (uploaded file) or create the secret using the following command: oc create secret generic https-cert-secret --from-file=tls.key --from-file=tls.crt Edit the spec.configuration.security.https section of the ApicurioRegistry CR for your Apicurio Registry deployment, for example: apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry metadata: name: example-apicurioregistry spec: configuration: # ... security: https: secretName: https-cert-secret Verify that the connection is working: Connect to a pod on the cluster using a remote shell (you can use the Apicurio Registry pod): oc rsh example-apicurioregistry-deployment-6f788db977-2wzpw Find the cluster IP of the Apicurio Registry pod from the Service resource (see the Location column in the web console). Afterwards, execute a test request (we are using a self-signed certificate, so an insecure flag is required): curl -k https://172.30.230.78:8443/health Note In the Kubernetes secret containing the HTTPS certificate and key, the names tls.crt and tls.key must be used for the provided values. This is currently not configurable. Disabling HTTP If you enabled HTTPS using the procedure in this section, you can also disable the default HTTP connection by setting spec.configuration.security.https.disableHttp to true . This removes the HTTP port 8080 from the Apicurio Registry pod container, Service , and the NetworkPolicy (if present). Importantly, Ingress is also removed because the Apicurio Registry Operator currently does not support configuring HTTPS in Ingress . Users must create an Ingress for HTTPS connections manually. Additional resources How to enable HTTPS and SSL termination in a Quarkus app 5.6. Configuring an HTTPS connection to Apicurio Registry from outside the OpenShift cluster The following procedure shows how to configure your Apicurio Registry deployment to expose an HTTPS edge-terminated route for connections from outside the OpenShift cluster. Prerequisites You must have already installed the Apicurio Registry Operator. Read the OpenShift documentation for creating secured routes . Procedure Add a second Route in addition to the HTTP route created by the Apicurio Registry Operator. For example: kind: Route apiVersion: route.openshift.io/v1 metadata: [...] labels: app: example-apicurioregistry [...] 
spec: host: example-apicurioregistry-default.apps.example.com to: kind: Service name: example-apicurioregistry-service-9whd7 weight: 100 port: targetPort: 8080 tls: termination: edge insecureEdgeTerminationPolicy: Redirect wildcardPolicy: None Note Make sure the insecureEdgeTerminationPolicy: Redirect configuration property is set. If you do not specify a certificate, OpenShift will use a default. Alternatively, you can generate a custom self-signed certificate using the following commands: openssl genrsa 2048 > tls.key && openssl req -new -x509 -nodes -sha256 -days 365 -key tls.key -out tls.crt Then create a route using the OpenShift CLI: oc create route edge \ --service=example-apicurioregistry-service-9whd7 \ --cert=tls.crt --key=tls.key \ --hostname=example-apicurioregistry-default.apps.example.com \ --insecure-policy=Redirect \ -n default
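After the edge-terminated route is created, you can verify it from outside the cluster. The following check is a minimal sketch that reuses the example hostname from this procedure; replace it with your own route host. Because this example uses a self-signed certificate, the -k flag skips certificate verification:
# Confirm that plain HTTP requests are redirected to HTTPS (insecureEdgeTerminationPolicy: Redirect)
curl -I http://example-apicurioregistry-default.apps.example.com
# Confirm that the HTTPS route reaches the Apicurio Registry health endpoint
curl -k https://example-apicurioregistry-default.apps.example.com/health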
[ "apiVersion: keycloak.org/v1alpha1 kind: Keycloak metadata: name: example-keycloak labels: app: sso spec: instances: 1 externalAccess: enabled: True podDisruptionBudget: enabled: True", "apiVersion: keycloak.org/v1alpha1 kind: KeycloakRealm metadata: name: registry-keycloakrealm labels: app: sso spec: instanceSelector: matchLabels: app: sso realm: displayName: Registry enabled: true id: registry realm: registry sslRequired: none roles: realm: - name: sr-admin - name: sr-developer - name: sr-readonly clients: - clientId: registry-client-ui implicitFlowEnabled: true redirectUris: - '*' standardFlowEnabled: true webOrigins: - '*' publicClient: true - clientId: registry-client-api implicitFlowEnabled: true redirectUris: - '*' standardFlowEnabled: true webOrigins: - '*' publicClient: true users: - credentials: - temporary: false type: password value: changeme enabled: true realmRoles: - sr-admin username: registry-admin - credentials: - temporary: false type: password value: changeme enabled: true realmRoles: - sr-developer username: registry-developer - credentials: - temporary: false type: password value: changeme enabled: true realmRoles: - sr-readonly username: registry-user", "apiVersion: v1 kind: Service metadata: name: keycloak-http labels: app: keycloak spec: ports: - name: keycloak-http protocol: TCP port: 8080 targetPort: 8080 selector: app: keycloak component: keycloak type: ClusterIP sessionAffinity: None status: loadBalancer: {}", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: keycloak-http labels: app: keycloak spec: rules: - host: KEYCLOAK_HTTP_HOST http: paths: - path: / pathType: ImplementationSpecific backend: service: name: keycloak-http port: number: 8080", "apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry metadata: name: example-apicurioregistry-kafkasql-keycloak spec: configuration: security: keycloak: url: \"http://keycloak-http-<namespace>.apps.<cluster host>\" # ^ Required # Use an HTTP URL in development. realm: \"registry\" # apiClientId: \"registry-client-api\" # ^ Optional (default value) # uiClientId: \"registry-client-ui\" # ^ Optional (default value) persistence: 'kafkasql' kafkasql: bootstrapServers: '<my-cluster>-kafka-bootstrap.<my-namespace>.svc:9092'", "openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout tls.key -out tls.crt", "create secret generic https-cert-secret --from-file=tls.key --from-file=tls.crt", "apiVersion: registry.apicur.io/v1 kind: ApicurioRegistry metadata: name: example-apicurioregistry spec: configuration: # security: https: secretName: https-cert-secret", "rsh example-apicurioregistry-deployment-6f788db977-2wzpw", "curl -k https://172.30.230.78:8443/health", "kind: Route apiVersion: route.openshift.io/v1 metadata: [...] labels: app: example-apicurioregistry [...] spec: host: example-apicurioregistry-default.apps.example.com to: kind: Service name: example-apicurioregistry-service-9whd7 weight: 100 port: targetPort: 8080 tls: termination: edge insecureEdgeTerminationPolicy: Redirect wildcardPolicy: None", "openssl genrsa 2048 > tls.key && openssl req -new -x509 -nodes -sha256 -days 365 -key tls.key -out tls.crt", "create route edge --service=example-apicurioregistry-service-9whd7 --cert=tls.crt --key=tls.key --hostname=example-apicurioregistry-default.apps.example.com --insecure-policy=Redirect -n default" ]
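As a quick way to exercise the authentication settings described in this chapter, you can request a token from the example registry realm and call the Apicurio Registry REST API with it. This is a minimal sketch rather than part of the official procedure: it assumes the example realm, the registry-client-api client, and the registry-admin user created above, that Direct Access Grants (the password grant) are enabled for that client in Red Hat Single Sign-On, that the Keycloak and registry hostnames below are replaced with your own, and that the jq utility is available to parse the JSON response:
# Request an access token from the registry realm (password grant; for testing only)
ACCESS_TOKEN=$(curl -s \
  -d "client_id=registry-client-api" \
  -d "grant_type=password" \
  -d "username=registry-admin" \
  -d "password=changeme" \
  "http://keycloak-http-<namespace>.apps.<cluster host>/auth/realms/registry/protocol/openid-connect/token" | jq -r .access_token)
# Call the protected core REST API with the bearer token
curl -H "Authorization: Bearer $ACCESS_TOKEN" "http://<registry host>/apis/registry/v2/search/artifacts"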
https://docs.redhat.com/en/documentation/red_hat_build_of_apicurio_registry/2.6/html/installing_and_deploying_apicurio_registry_on_openshift/securing-the-registry
Chapter 5. Using Insights tasks to help you convert from CentOS Linux 7 to RHEL 7
Chapter 5. Using Insights tasks to help you convert from CentOS Linux 7 to RHEL 7 You can use Red Hat Insights to help you convert from CentOS Linux 7 to RHEL 7. For more information about using Insights tasks to help convert your systems, see Converting using Insights in the Converting from a Linux distribution to RHEL using the Convert2RHEL utility documentation . Additional resources Video: Pre-conversion analysis for converting to Red Hat Enterprise Linux Video: Convert to Red Hat Enterprise Linux from CentOS7 Linux using Red Hat Insights Troubleshooting conversion-related Insights tasks Tasks help you update, manage, or secure your Red Hat Enterprise Linux infrastructure using Insights. Each task is a predefined playbook that executes a task from start to finish. If you have trouble completing some Insights conversion-related tasks, see: Troubleshooting issues with Red Hat Insights conversions
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_remediating_system_issues_using_red_hat_insights_tasks/convert-to-centos-using-tasks_overview-tasks
Chapter 6. GenericKafkaListener schema reference
Chapter 6. GenericKafkaListener schema reference Used in: KafkaClusterSpec Full list of GenericKafkaListener schema properties Configures listeners to connect to Kafka brokers within and outside OpenShift. Configure Kafka broker listeners using the listeners property in the Kafka resource. Listeners are defined as an array. Example Kafka resource showing listener configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: #... listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #... The name and port must be unique within the Kafka cluster. By specifying a unique name and port for each listener, you can configure multiple listeners. The name can be up to 25 characters long, comprising lower-case letters and numbers. 6.1. Specifying a port number The port number is the port used in the Kafka cluster, which might not be the same port used for access by a client. loadbalancer listeners use the specified port number, as do internal and cluster-ip listeners ingress and route listeners use port 443 for access nodeport listeners use the port number assigned by OpenShift For client connection, use the address and port for the bootstrap service of the listener. You can retrieve this from the status of the Kafka resource. Example command to retrieve the address and port for client connection oc get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name=="<listener_name>")].bootstrapServers}{"\n"}' Important When configuring listeners for client access to brokers, you can use port 9092 or higher (9093, 9094, and so on), but with a few exceptions. The listeners cannot be configured to use the ports reserved for interbroker communication (9090 and 9091), Prometheus metrics (9404), and JMX (Java Management Extensions) monitoring (9999). 6.2. Specifying listener types Set the type to internal for internal listeners. For external listeners, choose from route , loadbalancer , nodeport , or ingress . You can also configure a cluster-ip listener, which is an internal type used for building custom access mechanisms. internal You can configure internal listeners with or without encryption using the tls property. Example internal listener configuration #... spec: kafka: #... listeners: #... - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls #... route Configures an external listener to expose Kafka using OpenShift Routes and the HAProxy router. A dedicated Route is created for every Kafka broker pod. An additional Route is created to serve as a Kafka bootstrap address. Kafka clients can use these Routes to connect to Kafka on port 443. The client connects on port 443, the default router port, but traffic is then routed to the port you configure, which is 9094 in this example. Example route listener configuration #... spec: kafka: #... listeners: #... - name: external1 port: 9094 type: route tls: true #... ingress Configures an external listener to expose Kafka using Kubernetes Ingress and the Ingress NGINX Controller for Kubernetes . 
A dedicated Ingress resource is created for every Kafka broker pod. An additional Ingress resource is created to serve as a Kafka bootstrap address. Kafka clients can use these Ingress resources to connect to Kafka on port 443. The client connects on port 443, the default controller port, but traffic is then routed to the port you configure, which is 9095 in the following example. You must specify the hostname used by the bootstrap service using GenericKafkaListenerConfigurationBootstrap property. And you must also specify the hostnames used by the per-broker services using GenericKafkaListenerConfigurationBroker or hostTemplate properties. With the hostTemplate property, you don't need to specify the configuration for every broker. Example ingress listener configuration #... spec: kafka: #... listeners: #... - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: hostTemplate: broker-{nodeId}.myingress.com bootstrap: host: bootstrap.myingress.com #... Note External listeners using Ingress are currently only tested with the Ingress NGINX Controller for Kubernetes . loadbalancer Configures an external listener to expose Kafka using a Loadbalancer type Service . A new loadbalancer service is created for every Kafka broker pod. An additional loadbalancer is created to serve as a Kafka bootstrap address. Loadbalancers listen to the specified port number, which is port 9094 in the following example. You can use the loadBalancerSourceRanges property to configure source ranges to restrict access to the specified IP addresses. Example loadbalancer listener configuration #... spec: kafka: #... listeners: - name: external3 port: 9094 type: loadbalancer tls: true configuration: loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #... nodeport Configures an external listener to expose Kafka using a NodePort type Service . Kafka clients connect directly to the nodes of OpenShift. An additional NodePort type of service is created to serve as a Kafka bootstrap address. When configuring the advertised addresses for the Kafka broker pods, Streams for Apache Kafka uses the address of the node on which the given pod is running. You can use preferredNodePortAddressType property to configure the first address type checked as the node address . Example nodeport listener configuration #... spec: kafka: #... listeners: #... - name: external4 port: 9095 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #... Note TLS hostname verification is not currently supported when exposing Kafka clusters using node ports. cluster-ip Configures an internal listener to expose Kafka using a per-broker ClusterIP type Service . The listener does not use a headless service and its DNS names to route traffic to Kafka brokers. You can use this type of listener to expose a Kafka cluster when using the headless service is unsuitable. You might use it with a custom access mechanism, such as one that uses a specific Ingress controller or the OpenShift Gateway API. A new ClusterIP service is created for each Kafka broker pod. The service is assigned a ClusterIP address to serve as a Kafka bootstrap address with a per-broker port number. For example, you can configure the listener to expose a Kafka cluster over an Nginx Ingress Controller with TCP port configuration. Example cluster-ip listener configuration #... spec: kafka: #... listeners: - name: clusterip type: cluster-ip tls: false port: 9096 #... 6.3. 
Configuring network policies to restrict listener access Use networkPolicyPeers to configure network policies that restrict access to a listener at the network level. The following example shows a networkPolicyPeers configuration for a plain and a tls listener. In the following example: Only application pods matching the labels app: kafka-sasl-consumer and app: kafka-sasl-producer can connect to the plain listener. The application pods must be running in the same namespace as the Kafka broker. Only application pods running in namespaces matching the labels project: myproject and project: myproject2 can connect to the tls listener. The syntax of the networkPolicyPeers property is the same as the from property in NetworkPolicy resources. Example network policy configuration listeners: #... - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 networkPolicyPeers: - podSelector: matchLabels: app: kafka-sasl-consumer - podSelector: matchLabels: app: kafka-sasl-producer - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - namespaceSelector: matchLabels: project: myproject - namespaceSelector: matchLabels: project: myproject2 # ... 6.4. GenericKafkaListener schema properties Property Property type Description name string Name of the listener. The name will be used to identify the listener and the related OpenShift objects. The name has to be unique within a given Kafka cluster. The name can consist of lowercase characters and numbers and be up to 11 characters long. port integer Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients. type string (one of [ingress, internal, route, loadbalancer, cluster-ip, nodeport]) Type of the listener. The supported types are as follows: internal type exposes Kafka internally only within the OpenShift cluster. route type uses OpenShift Routes to expose Kafka. loadbalancer type uses LoadBalancer type services to expose Kafka. nodeport type uses NodePort type services to expose Kafka. ingress type uses OpenShift Nginx Ingress to expose Kafka with TLS passthrough. cluster-ip type uses a per-broker ClusterIP service. tls boolean Enables TLS encryption on the listener. This is a required property. For route and ingress type listeners, TLS encryption must always be enabled. authentication KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth , KafkaListenerAuthenticationCustom Authentication configuration for this listener. configuration GenericKafkaListenerConfiguration Additional listener configuration. networkPolicyPeers NetworkPolicyPeer array List of peers which should be able to connect to this listener. Peers in this list are combined using a logical OR operation. If this field is empty or missing, all connections will be allowed for this listener. If this field is present and contains at least one item, the listener only allows the traffic which matches at least one item in this list.
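To see how these listener settings come together from a client's point of view, the following sketch connects an external client through a TLS-enabled route listener. It is an illustration only and assumes a cluster named my-cluster in the kafka namespace, a listener named external1 as in the earlier example, an existing topic my-topic, and the standard Kafka command-line tools; adjust these names to your deployment:
# Extract the cluster CA certificate generated by Streams for Apache Kafka
oc get secret my-cluster-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
# Import the CA certificate into a client truststore
keytool -importcert -trustcacerts -noprompt -alias strimzi-ca -file ca.crt \
  -keystore truststore.p12 -storetype PKCS12 -storepass changeit
# Read the bootstrap address of the external1 listener from the Kafka resource status
BOOTSTRAP=$(oc get kafka my-cluster -n kafka -o=jsonpath='{.status.listeners[?(@.name=="external1")].bootstrapServers}')
# Produce a test message over TLS through the route listener (port 443)
bin/kafka-console-producer.sh --bootstrap-server "$BOOTSTRAP" --topic my-topic \
  --producer-property security.protocol=SSL \
  --producer-property ssl.truststore.type=PKCS12 \
  --producer-property ssl.truststore.location=truststore.p12 \
  --producer-property ssl.truststore.password=changeit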
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #", "get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==\"<listener_name>\")].bootstrapServers}{\"\\n\"}'", "# spec: kafka: # listeners: # - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls #", "# spec: kafka: # listeners: # - name: external1 port: 9094 type: route tls: true #", "# spec: kafka: # listeners: # - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: hostTemplate: broker-{nodeId}.myingress.com bootstrap: host: bootstrap.myingress.com #", "# spec: kafka: # listeners: - name: external3 port: 9094 type: loadbalancer tls: true configuration: loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #", "# spec: kafka: # listeners: # - name: external4 port: 9095 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #", "# spec: kafka: # listeners: - name: clusterip type: cluster-ip tls: false port: 9096 #", "listeners: # - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 networkPolicyPeers: - podSelector: matchLabels: app: kafka-sasl-consumer - podSelector: matchLabels: app: kafka-sasl-producer - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - namespaceSelector: matchLabels: project: myproject - namespaceSelector: matchLabels: project: myproject2" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-generickafkalistener-reference
Chapter 2. Cryostat migration process
Chapter 2. Cryostat migration process The process of migrating to Cryostat 3.0 includes the following tasks: Familiarize yourself with your current Cryostat installation and configured instances . Upgrade your existing Cryostat installation . Update your deployed applications . Restore configurations . 2.1. Familiarizing with your current Cryostat installation and configured instances The Cryostat Operator is normally updated automatically if the Update approval strategy is set to Automatic . However, because Cryostat 3.0 introduces changes to the installation mode and provided APIs as described in Major Cryostat Operator changes , you might need to perform manual actions to ensure that Cryostat is properly updated. To determine whether manual intervention is required, review the following steps to familiarize yourself with your current Cryostat installation and configured instances. Procedure As an administrator, in the main navigation of the Red Hat OpenShift web console, click the drop-down menu and select Administrator . In the navigation menu, click Operators > Installed Operators . Search for the installed Red Hat build of Cryostat Operator. To check Cryostat's installation mode, note the value in the Managed Namespaces column. This value can be All Namespaces or a user-defined namespace. Figure 2.1. Managed Namespaces details in Cryostat In the Name column, click Red Hat build of Cryostat . In the Operator details panel, click the Subscription tab. In the Subscription details panel, to check Cryostat's update approval strategy, note the value in the Update approval area. This value can be Manual or Automatic . Figure 2.2. Update approval details in Cryostat To check your configured instances: Click the Cluster Cryostat tab. This displays a list of the ClusterCryostat instances that have already been created. If there are no instances, the page displays No operands found . Click the Cryostat tab. This displays a list of the Cryostat instances that have already been created. If there are no instances, the page displays No operands found . Once you have identified the combination of installation mode and configured instances, you can upgrade Cryostat by completing the appropriate steps in Upgrading your Cryostat installation that match your deployment. 2.2. Upgrading your Cryostat installation The steps you must follow to upgrade Cryostat will vary depending on which of the following scenarios matches your existing Cryostat installation, based on Cryostat installation and instance combinations: Cryostat was installed to All Namespaces and Cryostat instances were previously created . Cryostat was installed to All Namespaces and ClusterCryostat instances were previously created . Cryostat was installed to All Namespaces and ClusterCryostat and Cryostat instances were previously created . Cryostat was installed to a target namespace and Cryostat instances were previously created . Cryostat was installed to a target namespace and ClusterCryostat instances were previously created . Cryostat was installed to a target namespace and ClusterCryostat and Cryostat instances were previously created . 2.2.1. Upgrading installations where Cryostat was installed to "All Namespaces" and "Cryostat" instances were previously created Note For this scenario, if the Update approval strategy is set to Automatic , no upgrade steps are required. For this scenario, if the Update approval strategy is set to Manual , complete the steps in the following procedure. 
Procedure As an administrator, in the main navigation of the Red Hat OpenShift web console, click the drop-down menu and select Administrator . In the navigation menu, click Operators > Installed Operators . Search for the installed Red Hat build of Cryostat Operator. Check the version and status of the Cryostat Operator. The Cryostat version will be 2.4 and the Status column should display Upgrade available . Click Upgrade available . Click Preview InstallPlan and then click Approve . You have now successfully upgraded to Cryostat 3.0. Note To ensure that cached sessions are reset, a hard browser refresh might be required when opening the Cryostat web application. 2.2.2. Upgrading installations where Cryostat was installed to "All Namespaces" and "ClusterCryostat" instances were previously created For this scenario, complete the steps in the following procedure: Procedure As an administrator, in the main navigation of the Red Hat OpenShift web console, click the drop-down menu and select Administrator . In the navigation menu, click Operators > Installed Operators . Search for the installed Red Hat build of Cryostat Operator. Check the version and status of the Cryostat Operator. If the Update approval strategy for Cryostat is Automatic , the Cryostat version will have already been upgraded to 3.0. Continue to Step 5 . If the Update approval Strategy for Cryostat is Manual , the Cryostat version will be 2.4 and the Status column should display Upgrade available . Click Upgrade available . Click Preview InstallPlan and then click Approve . Once Cryostat 3.0 is installed, navigate to Administration > CustomResourceDefinitions . Search for and click into ClusterCryostat CRD (custom resource definition). Click the Instances tab. For each ClusterCryostat instance that you need to migrate, make a copy of the YAML configuration: Click the name of the ClusterCryostat instance. Then click the YAML tab and click Download in the lower-right corner of the panel. Repeat step 8a for each instance that you need to migrate. Navigate back to the ClusterCryostat Instances tab. Remove each ClusterCryostat instance by clicking the ellipsis icon on the right. This includes the instances that you downloaded in Step 8 . Navigate to the Installed Operators > Red Hat build of Cryostat > Cryostat tab. For each ClusterCryostat instance in Step 8 : Click Create Cryostat . Use the YAML view to modify your ClusterCryostat configurations. For example: Old YAML settings: New YAML settings: Click Create . You have now successfully migrated your Cryostat instances. For more information about updating your deployed applications, see Updating your deployed applications . 2.2.3. Upgrading installations where Cryostat was installed to "All Namespaces" and "ClusterCryostat" and "Cryostat" instances were previously created For information about migrating ClusterCryostat instances, see Upgrading installations where Cryostat was installed to "All Namespaces" and "ClusterCryostat" instances were previously created . Note Cryostat instances do not need to be modified or removed. 2.2.4. 
Upgrading installations where Cryostat was installed to a target namespace and "Cryostat" instances were previously created For this scenario, regardless of whether the Update approval strategy is Automatic or Manual , the upgrade paths for Cryostat Operators will fail with the following error: error: OwnNamespace InstallModeType not supported, cannot configure to watch own namespace Before you continue, review the information in Migration recommendations . For this scenario, complete the steps in the following procedure. Procedure As an administrator, in the main navigation of the Red Hat OpenShift web console, click the drop-down menu and select Administrator . In the navigation menu, click Operators > Installed Operators . Note Ensure that you have selected the correct project (namespace) in which Cryostat was installed. Search for the installed Red Hat build of Cryostat version 2.4 Operator. Uninstall Red Hat build of Cryostat from Operators > Installed Operators . If the Update approval strategy is Manual , only Cryostat 2.4 is displayed in the Installed Operators table. Click the ellipsis icon on the right and select Uninstall Operator . If the Update approval Strategy is Automatic , Cryostat 2.4 and 3.0 are both displayed in the Installed Operators table. For Cryostat 2.4, click the ellipsis icon and select Delete ClusterServiceVersion . For Cryostat 3.0, click the ellipsis icon and select Uninstall Operator . Navigate to Operators > OperatorHub and search for Cryostat. Click the Red Hat build of Cryostat tile and install version 3.0 into the openshift-operators namespace. Openshift-operators is the default namespace for Operator installations to "All Namespaces". You have now successfully migrated to the Cryostat 3.0 Operator. For more information about updating your deployed applications, see Updating your deployed applications . Note To ensure that cached sessions are reset, a hard browser refresh might be required when opening the Cryostat web application. 2.2.5. Upgrading installations where Cryostat was installed to a target namespace and "ClusterCryostat" instances were previously created For this scenario, regardless of whether the Update approval strategy is Automatic or Manual , the upgrade paths for Cryostat Operators will fail with the following error: error: OwnNamespace InstallModeType not supported, cannot configure to watch own namespace Before you continue, review the information in Migration recommendations . For this scenario, complete the steps in the following procedure. Procedure As an administrator, in the main navigation of the Red Hat OpenShift web console, click the drop-down menu and select Administrator . In the navigation menu, click Operators > Installed Operators . Note Ensure that you have selected the correct project (namespace) in which Cryostat was installed. Search for the installed Red Hat build of Cryostat version 2.4 Operator. Click the Cluster Cryostat tab. To export an instance that needs to be migrated, click this instance name. Then click the YAML tab and click Download in the lower-right corner of the panel. Repeat step 4a for each instance that you need to migrate. Once the instances have been exported, to delete these instances, select Delete ClusterCryostat from the Actions drop-down menu in the upper-right corner of the panel. Uninstall Red Hat build of Cryostat from Operators > Installed Operators . If the Update approval strategy is Manual , only Cryostat 2.4 is displayed in the Installed Operators table. 
Click the ellipsis icon on the right and select Uninstall Operator . If the Update approval Strategy is Automatic , Cryostat 2.4 and 3.0 are both displayed in the Installed Operators table. For Cryostat 2.4, click the ellipsis icon and select Delete ClusterServiceVersion . For Cryostat 3.0, click the ellipsis icon and select Uninstall Operator . Navigate to Operators > OperatorHub and search for Cryostat. Click the Red Hat build of Cryostat tile and install version 3.0 into the openshift-operators namespace. Openshift-operators is the default namespace for Operator installations to "All Namespaces". For each ClusterCryostat instance that you exported in Step 4 : Click Create Cryostat . Using the YAML from Step 4 , use the YAML view to modify your ClusterCryostat configurations. For example: Old YAML settings: New YAML settings: Click Create . You have now successfully migrated to the Cryostat 3.0 Operator. For more information about updating your deployed applications, see Updating your deployed applications . 2.2.6. Upgrading installations where Cryostat was installed to a target namespace and "ClusterCryostat" and "Cryostat" instances were previously created For information about migrating ClusterCryostat instances, see Upgrading installations where Cryostat was installed to a target namespace and ClusterCryostat instances were previously created . Note Cryostat instances do not need to be modified or removed. 2.3. Updating your deployed applications For more information about configuring your Java applications in Cryostat 3.0, see Configuring Java applications . 2.4. Restoring configurations For previously customized Cryostat configurations such as event templates, dashboard layouts, JFR recordings, and automated rules, review the following guides to restore these customizations: Event templates For more information, see Step 7 in Using custom event templates . Dashboard layouts For more information, see Restoring a dashboard layout . JFR recordings For more information, see Uploading a JFR recording to the Cryostat archives location . Automated rules For more information, see Uploading an automated rule in JSON . Revised on 2024-07-02 14:48:24 UTC
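If you prefer the command line over the web console for the export and re-creation steps above, the following sketch shows an equivalent approach. It is an assumption-laden example rather than part of the official procedure: it assumes that the ClusterCryostat CRD is exposed to oc under the plural name clustercryostats and that you have already converted the backed-up YAML to the new settings shown earlier, saved here under the hypothetical file name cryostat-sample-converted.yaml; adjust resource names and namespaces to your environment:
# Back up all existing ClusterCryostat resources before removing them
oc get clustercryostats --all-namespaces -o yaml > clustercryostat-backup.yaml
# After upgrading the Operator, create the converted Cryostat resource
oc apply -f cryostat-sample-converted.yaml -n openshift-operators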
[ "apiVersion: operator.cryostat.io/v1beta1 kind: ClusterCryostat metadata: name: cryostat-sample spec: enableCertManager: true installNamespace: cryostat targetNamespaces: - openshift-operators", "apiVersion: operator.cryostat.io/v1beta2 kind: Cryostat metadata: name: cryostat-sample namespace: openshift-operators spec: enableCertManager: true targetNamespaces: - openshift-operators", "apiVersion: operator.cryostat.io/v1beta1 kind: ClusterCryostat metadata: name: cryostat-sample spec: enableCertManager: true installNamespace: cryostat targetNamespaces: - openshift-operators", "apiVersion: operator.cryostat.io/v1beta2 kind: Cryostat metadata: name: cryostat-sample namespace: openshift-operators spec: enableCertManager: true targetNamespaces: - openshift-operators (or Target Namespace)" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/migrating_cryostat_2.4_to_cryostat_3.0/migration_process
8.149. nfs-utils
8.149. nfs-utils 8.149.1. RHBA-2014:1407 - nfs-utils bug fix and enhancement update Updated nfs-utils packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The nfs-utils packages provide a daemon for the kernel Network File System (NFS) server and related tools, which provides better performance than the traditional Linux NFS server used by most users. These packages contain the mount.nfs, umount.nfs, and showmount programs. Bug Fixes BZ# 1007195 Prior to this update, the nfsiostat utility was run in the background with the stdout stream redirected to a file. As a consequence, the data was not displayed in a timely matter. This update clears stdout periodically to ensure the buffered output of nfsiostat does not get lost if the nfsiostat process is terminated. BZ# 1033708 The nfs-utils packages were moved to an in-kernel keyring to store the ID mappings needed for NFSv4. However, the kernel key is too small for large enterprise environments. With this update, the nfsidmap utility, used by the kernel to do ID mapping, has been changed to use multiple keyrings. BZ# 1040135 Previously, the rpc.idmapd name mapping daemon returned a warning message after failing to open communication with a client mount. As the warning message was harmless and unnecessary, rpc.idmapd now displays the message only if the user passes the "--verbose" option on the command line. BZ# 1018358 The starting of the rpc.statd utility caused a creation of an extra privileged UDP socket. As a consequence, rpc.statd listened on a random port on all the interfaces, which is required only for internal communication with the rpc.lockd utility. With this update, rpc.statd no longer opens an extra socket in the described situation, and instead opens an extra random port on the loopback address only. BZ# 1075224 The starting of the rpc.statd utility caused messages being flooded to the log. With this update, the socket is kept open until another one is found. As a result, the same port is not reused, and messages are no longer flooding the log in the described situation. BZ# 1079047 When root squashing was enabled and world execute permissions were disabled, using the "-o remount" option of the mount utility caused the mount attempt to fail. This update fixes the chk_mountpoint() function, and the mount utility now checks only execute permissions for unprivileged users, thus fixing this bug. BZ# 1081208 When the rpc.gssd daemon was started, a zero lifetime was sent to the kernel, which then guessed and used the default lifetime. To fix this bug, the correct lifetime is now passed to the kernel, which uses it for timeouts in GSS contexts. BZ# 1087878 Previously, the rpcdebug utility did not work correctly when used with the NFS module and the "state" option. This update allows the "state" option to be used with the NFS module, and NFS state debugging can now be set as expected. BZ# 1113204 Previously, machines with multiple disks made the rpc.mountd utility use 100% of the CPU for 30 to 40 minutes, needlessly scanning the disks. The libblkid daemon usage has been optimized, and rpc.mountd no longer causes downtime in this scenario. BZ# 1136814 Due to a wrong indentation in its code, the nfsiostat utility failed to start. This update adds the correct indentation, and nfsiostat now starts as expected. In addition, this update adds the following Enhancements BZ# 918319 This update enhances the nfsmount.conf file manual page to include syntax for mount options. 
The reader now has a better understanding of how to set variables in the configuration file. BZ# 1112776 IPv6 is now a supported address type, and the exportfs utility can therefore use IPv6 addresses to export file systems. BZ# 869684 This update adds descriptions of what each column value in the output of the nfsiostat utility means. Users of nfs-utils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. After installing this update, the nfs service will be restarted automatically.
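As an illustration of the two configuration-related enhancements above, the following is a hedged sketch with example values only; the paths, options, and addresses are illustrative and not taken from the errata. The nfsmount.conf file uses the Variable=Value syntax covered by the updated manual page, and exportfs can now export file systems to IPv6 clients:

# /etc/nfsmount.conf (example values)
[ NFSMount_Global_Options ]
Defaultvers=4
Retrans=2
Timeo=600

# /etc/exports entry exporting to an IPv6 network (example)
/srv/nfs 2001:db8::/64(ro,sync)

$ exportfs -ra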
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/nfs-utils
Chapter 4. Kernel Module Management Operator
Chapter 4. Kernel Module Management Operator Learn about the Kernel Module Management (KMM) Operator and how you can use it to deploy out-of-tree kernel modules and device plugins on OpenShift Container Platform clusters. 4.1. About the Kernel Module Management Operator The Kernel Module Management (KMM) Operator manages, builds, signs, and deploys out-of-tree kernel modules and device plugins on OpenShift Container Platform clusters. KMM adds a new Module CRD which describes an out-of-tree kernel module and its associated device plugin. You can use Module resources to configure how to load the module, define ModuleLoader images for kernel versions, and include instructions for building and signing modules for specific kernel versions. KMM is designed to accommodate multiple kernel versions at once for any kernel module, allowing for seamless node upgrades and reduced application downtime. 4.2. Installing the Kernel Module Management Operator As a cluster administrator, you can install the Kernel Module Management (KMM) Operator by using the OpenShift CLI or the web console. The KMM Operator is supported on OpenShift Container Platform 4.12 and later. Installing KMM on version 4.11 does not require specific additional steps. For details on installing KMM on version 4.10 and earlier, see the section "Installing the Kernel Module Management Operator on earlier versions of OpenShift Container Platform". 4.2.1. Installing the Kernel Module Management Operator using the web console As a cluster administrator, you can install the Kernel Module Management (KMM) Operator using the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Install the Kernel Module Management Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Select Kernel Module Management Operator from the list of available Operators, and then click Install . On the Install Operator page, select the Installation mode as A specific namespace on the cluster . From the Installed Namespace list, select the openshift-kmm namespace. Click Install . Verification To verify that KMM Operator installed successfully: Navigate to the Operators Installed Operators page. Ensure that Kernel Module Management Operator is listed in the openshift-kmm project with a Status of InstallSucceeded . Note During installation, an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. Troubleshooting To troubleshoot issues with Operator installation: Navigate to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Navigate to the Workloads Pods page and check the logs for pods in the openshift-kmm project. 4.2.2. Installing the Kernel Module Management Operator by using the CLI As a cluster administrator, you can install the Kernel Module Management (KMM) Operator by using the OpenShift CLI. Prerequisites You have a running OpenShift Container Platform cluster. You installed the OpenShift CLI ( oc ). You are logged into the OpenShift CLI as a user with cluster-admin privileges. 
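Optionally, you can confirm these prerequisites from the CLI before you begin. The following is a hedged sketch; the second command checks for broad cluster permissions and is only an approximation of the cluster-admin requirement:

$ oc whoami
$ oc auth can-i '*' '*' --all-namespaces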
Procedure Install KMM in the openshift-kmm namespace: Create the following Namespace CR and save the YAML file, for example, kmm-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-kmm Create the following OperatorGroup CR and save the YAML file, for example, kmm-op-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm Create the following Subscription CR and save the YAML file, for example, kmm-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0 Create the subscription object by running the following command: USD oc create -f kmm-sub.yaml Verification To verify that the Operator deployment is successful, run the following command: USD oc get -n openshift-kmm deployments.apps kmm-operator-controller-manager Example output NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller-manager 1/1 1 1 97s The Operator is available. 4.2.3. Installing the Kernel Module Management Operator on earlier versions of OpenShift Container Platform The KMM Operator is supported on OpenShift Container Platform 4.12 and later. For version 4.10 and earlier, you must create a new SecurityContextConstraint object and bind it to the Operator's ServiceAccount . As a cluster administrator, you can install the Kernel Module Management (KMM) Operator by using the OpenShift CLI. Prerequisites You have a running OpenShift Container Platform cluster. You installed the OpenShift CLI ( oc ). You are logged into the OpenShift CLI as a user with cluster-admin privileges. 
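Because this procedure applies to OpenShift Container Platform 4.10 and earlier, you can first confirm the cluster version. The following is a hedged sketch that assumes the standard ClusterVersion object named version:

$ oc get clusterversion version -o jsonpath='{.status.desired.version}'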
Procedure Install KMM in the openshift-kmm namespace: Create the following Namespace CR and save the YAML file, for example, kmm-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: openshift-kmm Create the following SecurityContextConstraint object and save the YAML file, for example, kmm-security-constraint.yaml : allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: false allowPrivilegedContainer: false allowedCapabilities: - NET_BIND_SERVICE apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: [] kind: SecurityContextConstraints metadata: name: restricted-v2 priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - ALL runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs seccompProfiles: - runtime/default supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret Bind the SecurityContextConstraint object to the Operator's ServiceAccount by running the following commands: USD oc apply -f kmm-security-constraint.yaml USD oc adm policy add-scc-to-user kmm-security-constraint -z kmm-operator-controller-manager -n openshift-kmm Create the following OperatorGroup CR and save the YAML file, for example, kmm-op-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm Create the following Subscription CR and save the YAML file, for example, kmm-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0 Create the subscription object by running the following command: USD oc create -f kmm-sub.yaml Verification To verify that the Operator deployment is successful, run the following command: USD oc get -n openshift-kmm deployments.apps kmm-operator-controller-manager Example output NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller-manager 1/1 1 1 97s The Operator is available. 4.3. Kernel module deployment For each Module resource, Kernel Module Management (KMM) can create a number of DaemonSet resources: One ModuleLoader DaemonSet per compatible kernel version running in the cluster. One device plugin DaemonSet , if configured. The module loader daemon set resources run ModuleLoader images to load kernel modules. A module loader image is an OCI image that contains the .ko files and both the modprobe and sleep binaries. When the module loader pod is created, the pod runs modprobe to insert the specified module into the kernel. It then enters a sleep state until it is terminated. When that happens, the ExecPreStop hook runs modprobe -r to unload the kernel module. If the .spec.devicePlugin attribute is configured in a Module resource, then KMM creates a device plugin daemon set in the cluster. That daemon set targets: Nodes that match the .spec.selector of the Module resource. Nodes with the kernel module loaded (where the module loader pod is in the Ready condition). 4.3.1. The Module custom resource definition The Module custom resource definition (CRD) represents a kernel module that can be loaded on all or select nodes in the cluster, through a module loader image. 
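To confirm that the Module CRD is available in the cluster and to list any existing Module resources, you can run commands similar to the following (a hedged sketch; the CRD name modules.kmm.sigs.x-k8s.io is assumed to match the kmm.sigs.x-k8s.io API group used in the examples in this chapter):

$ oc get crd modules.kmm.sigs.x-k8s.io
$ oc get modules --all-namespaces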
A Module custom resource (CR) specifies one or more kernel versions with which it is compatible, and a node selector. The compatible versions for a Module resource are listed under .spec.moduleLoader.container.kernelMappings . A kernel mapping can either match a literal version, or use regexp to match many of them at the same time. The reconciliation loop for the Module resource runs the following steps: List all nodes matching .spec.selector . Build a set of all kernel versions running on those nodes. For each kernel version: Go through .spec.moduleLoader.container.kernelMappings and find the appropriate container image name. If the kernel mapping has build or sign defined and the container image does not already exist, run the build, the signing job, or both, as needed. Create a module loader daemon set with the container image determined in the step. If .spec.devicePlugin is defined, create a device plugin daemon set using the configuration specified under .spec.devicePlugin.container . Run garbage-collect on: Existing daemon set resources targeting kernel versions that are not run by any node in the cluster. Successful build jobs. Successful signing jobs. 4.3.2. Security and permissions Important Loading kernel modules is a highly sensitive operation. After they are loaded, kernel modules have all possible permissions to do any kind of operation on the node. 4.3.2.1. ServiceAccounts and SecurityContextConstraints Kernel Module Management (KMM) creates a privileged workload to load the kernel modules on nodes. That workload needs ServiceAccounts allowed to use the privileged SecurityContextConstraint (SCC) resource. The authorization model for that workload depends on the namespace of the Module resource, as well as its spec. If the .spec.moduleLoader.serviceAccountName or .spec.devicePlugin.serviceAccountName fields are set, they are always used. If those fields are not set, then: If the Module resource is created in the operator's namespace ( openshift-kmm by default), then KMM uses its default, powerful ServiceAccounts to run the daemon sets. If the Module resource is created in any other namespace, then KMM runs the daemon sets as the namespace's default ServiceAccount . The Module resource cannot run a privileged workload unless you manually enable it to use the privileged SCC. Important openshift-kmm is a trusted namespace. When setting up RBAC permissions, remember that any user or ServiceAccount creating a Module resource in the openshift-kmm namespace results in KMM automatically running privileged workloads on potentially all nodes in the cluster. To allow any ServiceAccount to use the privileged SCC and therefore to run module loader or device plugin pods, use the following command: USD oc adm policy add-scc-to-user privileged -z "USD{serviceAccountName}" [ -n "USD{namespace}" ] 4.3.2.2. Pod security standards OpenShift runs a synchronization mechanism that sets the namespace Pod Security level automatically based on the security contexts in use. No action is needed. Additional resources Understanding and managing pod security admission . 4.3.3. 
Example Module CR The following is an annotated Module example: apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: <my_kmod> spec: moduleLoader: container: modprobe: moduleName: <my_kmod> 1 dirName: /opt 2 firmwarePath: /firmware 3 parameters: 4 - param=1 kernelMappings: 5 - literal: 6.0.15-300.fc37.x86_64 containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64 - regexp: '^.+\fc37\.x86_64USD' 6 containerImage: "some.other.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}" - regexp: '^.+USD' 7 containerImage: "some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}" build: buildArgs: 8 - name: ARG_NAME value: <some_value> secrets: - name: <some_kubernetes_secret> 9 baseImageRegistryTLS: 10 insecure: false insecureSkipTLSVerify: false 11 dockerfileConfigMap: 12 name: <my_kmod_dockerfile> sign: certSecret: name: <cert_secret> 13 keySecret: name: <key_secret> 14 filesToSign: - /opt/lib/modules/USD{KERNEL_FULL_VERSION}/<my_kmod>.ko registryTLS: 15 insecure: false 16 insecureSkipTLSVerify: false serviceAccountName: <sa_module_loader> 17 devicePlugin: 18 container: image: some.registry/org/device-plugin:latest 19 env: - name: MY_DEVICE_PLUGIN_ENV_VAR value: SOME_VALUE volumeMounts: 20 - mountPath: /some/mountPath name: <device_plugin_volume> volumes: 21 - name: <device_plugin_volume> configMap: name: <some_configmap> serviceAccountName: <sa_device_plugin> 22 imageRepoSecret: 23 name: <secret_name> selector: node-role.kubernetes.io/worker: "" 1 1 1 Required. 2 Optional. 3 Optional: Copies /firmware/* into /var/lib/firmware/ on the node. 4 Optional. 5 At least one kernel item is required. 6 For each node running a kernel matching the regular expression, KMM creates a DaemonSet resource running the image specified in containerImage with USD{KERNEL_FULL_VERSION} replaced with the kernel version. 7 For any other kernel, build the image using the Dockerfile in the my-kmod ConfigMap. 8 Optional. 9 Optional: A value for some-kubernetes-secret can be obtained from the build environment at /run/secrets/some-kubernetes-secret . 10 Optional: Avoid using this parameter. If set to true , the build is allowed to pull the image in the Dockerfile FROM instruction using plain HTTP. 11 Optional: Avoid using this parameter. If set to true , the build will skip any TLS server certificate validation when pulling the image in the Dockerfile FROM instruction using plain HTTP. 12 Required. 13 Required: A secret holding the public secureboot key with the key 'cert'. 14 Required: A secret holding the private secureboot key with the key 'key'. 15 Optional: Avoid using this parameter. If set to true , KMM will be allowed to check if the container image already exists using plain HTTP. 16 Optional: Avoid using this parameter. If set to true , KMM will skip any TLS server certificate validation when checking if the container image already exists. 17 Optional. 18 Optional. 19 Required: If the device plugin section is present. 20 Optional. 21 Optional. 22 Optional. 23 Optional: Used to pull module loader and device plugin images. 4.4. Using a ModuleLoader image Kernel Module Management (KMM) works with purpose-built module loader images. These are standard OCI images that must satisfy the following requirements: .ko files must be located in /opt/lib/modules/USD{KERNEL_VERSION} . modprobe and sleep binaries must be defined in the USDPATH variable. 4.4.1. 
Running depmod If your module loader image contains several kernel modules and if one of the modules depends on another module, it is best practice to run depmod at the end of the build process to generate dependencies and map files. Note You must have a Red Hat subscription to download the kernel-devel package. Procedure To generate modules.dep and .map files for a specific kernel version, run depmod -b /opt USD{KERNEL_VERSION} . 4.4.1.1. Example Dockerfile If you are building your image on OpenShift Container Platform, consider using the Driver Tool Kit (DTK). For further information, see using an entitled build . apiVersion: v1 kind: ConfigMap metadata: name: kmm-ci-dockerfile data: dockerfile: | ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_VERSION WORKDIR /usr/src RUN ["git", "clone", "https://github.com/rh-ecosystem-edge/kernel-module-management.git"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_VERSION}/build make all FROM registry.redhat.io/ubi8/ubi-minimal ARG KERNEL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN depmod -b /opt USD{KERNEL_VERSION} Additional resources Driver Toolkit . 4.4.2. Building in the cluster KMM can build module loader images in the cluster. Follow these guidelines: Provide build instructions using the build section of a kernel mapping. Copy the Dockerfile for your container image into a ConfigMap resource, under the dockerfile key. Ensure that the ConfigMap is located in the same namespace as the Module . KMM checks if the image name specified in the containerImage field exists. If it does, the build is skipped. Otherwise, KMM creates a Build resource to build your image. After the image is built, KMM proceeds with the Module reconciliation. See the following example. # ... - regexp: '^.+USD' containerImage: "some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}" build: buildArgs: 1 - name: ARG_NAME value: <some_value> secrets: 2 - name: <some_kubernetes_secret> 3 baseImageRegistryTLS: insecure: false 4 insecureSkipTLSVerify: false 5 dockerfileConfigMap: 6 name: <my_kmod_dockerfile> registryTLS: insecure: false 7 insecureSkipTLSVerify: false 8 1 Optional. 2 Optional. 3 Will be mounted in the build pod as /run/secrets/some-kubernetes-secret . 4 Optional: Avoid using this parameter. If set to true , the build will be allowed to pull the image in the Dockerfile FROM instruction using plain HTTP. 5 Optional: Avoid using this parameter. If set to true , the build will skip any TLS server certificate validation when pulling the image in the Dockerfile FROM instruction using plain HTTP. 6 Required. 7 Optional: Avoid using this parameter. If set to true , KMM will be allowed to check if the container image already exists using plain HTTP. 8 Optional: Avoid using this parameter. If set to true , KMM will skip any TLS server certificate validation when checking if the container image already exists. Additional resources Build configuration resources . 4.4.3. Using the Driver Toolkit The Driver Toolkit (DTK) is a convenient base image for building build module loader images. It contains tools and libraries for the OpenShift version currently running in the cluster. Procedure Use DTK as the first stage of a multi-stage Dockerfile . Build the kernel modules. 
Copy the .ko files into a smaller end-user image such as ubi-minimal . To leverage DTK in your in-cluster build, use the DTK_AUTO build argument. The value is automatically set by KMM when creating the Build resource. See the following example. ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_VERSION WORKDIR /usr/src RUN ["git", "clone", "https://github.com/rh-ecosystem-edge/kernel-module-management.git"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_VERSION}/build make all FROM registry.redhat.io/ubi8/ubi-minimal ARG KERNEL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN depmod -b /opt USD{KERNEL_VERSION} Additional resources Driver Toolkit . 4.5. Using signing with Kernel Module Management (KMM) On a Secure Boot enabled system, all kernel modules (kmods) must be signed with a public/private key-pair enrolled into the Machine Owner's Key (MOK) database. Drivers distributed as part of a distribution should already be signed by the distribution's private key, but for kernel modules build out-of-tree, KMM supports signing kernel modules using the sign section of the kernel mapping. For more details on using Secure Boot, see Generating a public and private key pair Prerequisites A public private key pair in the correct (DER) format. At least one secure-boot enabled node with the public key enrolled in its MOK database. Either a pre-built driver container image, or the source code and Dockerfile needed to build one in-cluster. 4.6. Adding the keys for secureboot To use KMM Kernel Module Management (KMM) to sign kernel modules, a certificate and private key are required. For details on how to create these, see Generating a public and private key pair . For details on how to extract the public and private key pair, see Signing kernel modules with the private key . Use steps 1 through 4 to extract the keys into files. Procedure Create the sb_cert.cer file that contains the certificate and the sb_cert.priv file that contains the private key: USD openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch -config configuration_file.config -outform DER -out my_signing_key_pub.der -keyout my_signing_key.priv Add the files by using one of the following methods: Add the files as secrets directly: USD oc create secret generic my-signing-key --from-file=key=<my_signing_key.priv> USD oc create secret generic my-signing-key-pub --from-file=cert=<my_signing_key_pub.der> Add the files by base64 encoding them: USD cat sb_cert.priv | base64 -w 0 > my_signing_key2.base64 USD cat sb_cert.cer | base64 -w 0 > my_signing_key_pub.base64 Add the encoded text to a YAML file: apiVersion: v1 kind: Secret metadata: name: my-signing-key-pub namespace: default 1 type: Opaque data: cert: <base64_encoded_secureboot_public_key> --- apiVersion: v1 kind: Secret metadata: name: my-signing-key namespace: default 2 type: Opaque data: key: <base64_encoded_secureboot_private_key> 1 2 namespace - Replace default with a valid namespace. Apply the YAML file: USD oc apply -f <yaml_filename> 4.6.1. Checking the keys After you have added the keys, you must check them to ensure they are set correctly. 
Procedure Check to ensure the public key secret is set correctly: USD oc get secret -o yaml <certificate secret name> | awk '/cert/{print USD2; exit}' | base64 -d | openssl x509 -inform der -text This should display a certificate with a Serial Number, Issuer, Subject, and more. Check to ensure the private key secret is set correctly: USD oc get secret -o yaml <private key secret name> | awk '/key/{print USD2; exit}' | base64 -d This should display the key enclosed in the -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY----- lines. 4.7. Signing a pre-built driver container Use this procedure if you have a pre-built image, such as an image either distributed by a hardware vendor or built elsewhere. The following YAML file adds the public/private key-pair as secrets with the required key names - key for the private key, cert for the public key. The cluster then pulls down the unsignedImage image, opens it, signs the kernel modules listed in filesToSign , adds them back, and pushes the resulting image as containerImage . Kernel Module Management (KMM) should then deploy the DaemonSet that loads the signed kmods onto all the nodes that match the selector. The driver containers should run successfully on any nodes that have the public key in their MOK database, and any nodes that are not secure-boot enabled, which ignore the signature. They should fail to load on any that have secure-boot enabled but do not have that key in their MOK database. Prerequisites The keySecret and certSecret secrets have been created. Procedure Apply the YAML file: --- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module spec: moduleLoader: serviceAccountName: default container: modprobe: 1 moduleName: '<your module name>' kernelMappings: # the kmods will be deployed on all nodes in the cluster with a kernel that matches the regexp - regexp: '^.*\.x86_64USD' # the container to produce containing the signed kmods containerImage: <image name e.g. quay.io/myuser/my-driver:<kernelversion>-signed> sign: # the image containing the unsigned kmods (we need this because we are not building the kmods within the cluster) unsignedImage: <image name e.g. quay.io/myuser/my-driver:<kernelversion> > keySecret: # a secret holding the private secureboot key with the key 'key' name: <private key secret name> certSecret: # a secret holding the public secureboot key with the key 'cert' name: <certificate secret name> filesToSign: # full path within the unsignedImage container to the kmod(s) to sign - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: # the name of a secret containing credentials to pull unsignedImage and push containerImage to the registry name: repo-pull-secret selector: kubernetes.io/arch: amd64 1 modprobe - The name of the kmod to load. 4.8. Building and signing a ModuleLoader container image Use this procedure if you have source code and must build your image first. The following YAML file builds a new container image using the source code from the repository. The image produced is saved back in the registry with a temporary name, and this temporary image is then signed using the parameters in the sign section. The temporary image name is based on the final image name and is set to be <containerImage>:<tag>-<namespace>_<module name>_kmm_unsigned . 
For example, using the following YAML file, Kernel Module Management (KMM) builds an image named example.org/repository/minimal-driver:final-default_example-module_kmm_unsigned containing the build with unsigned kmods and push it to the registry. Then it creates a second image named example.org/repository/minimal-driver:final that contains the signed kmods. It is this second image that is loaded by the DaemonSet object and deploys the kmods to the cluster nodes. After it is signed, the temporary image can be safely deleted from the registry. It will be rebuilt, if needed. Prerequisites The keySecret and certSecret secrets have been created. Procedure Apply the YAML file: --- apiVersion: v1 kind: ConfigMap metadata: name: example-module-dockerfile namespace: default 1 data: Dockerfile: | ARG DTK_AUTO ARG KERNEL_VERSION FROM USD{DTK_AUTO} as builder WORKDIR /build/ RUN git clone -b main --single-branch https://github.com/rh-ecosystem-edge/kernel-module-management.git WORKDIR kernel-module-management/ci/kmm-kmod/ RUN make FROM registry.access.redhat.com/ubi8/ubi:latest ARG KERNEL_VERSION RUN yum -y install kmod && yum clean all RUN mkdir -p /opt/lib/modules/USD{KERNEL_VERSION} COPY --from=builder /build/kernel-module-management/ci/kmm-kmod/*.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN /usr/sbin/depmod -b /opt --- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module namespace: default 2 spec: moduleLoader: serviceAccountName: default 3 container: modprobe: moduleName: simple_kmod kernelMappings: - regexp: '^.*\.x86_64USD' containerImage: < the name of the final driver container to produce> build: dockerfileConfigMap: name: example-module-dockerfile sign: keySecret: name: <private key secret name> certSecret: name: <certificate secret name> filesToSign: - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: 4 name: repo-pull-secret selector: # top-level selector kubernetes.io/arch: amd64 1 2 namespace - Replace default with a valid namespace. 3 serviceAccountName - The default serviceAccountName does not have the required permissions to run a module that is privileged. For information on creating a service account, see "Creating service accounts" in the "Additional resources" of this section. 4 imageRepoSecret - Used as imagePullSecrets in the DaemonSet object and to pull and push for the build and sign features. Additional resources For information on creating a service account, see Creating service accounts . 4.9. Debugging and troubleshooting If the kmods in your driver container are not signed or are signed with the wrong key, then the container can enter a PostStartHookError or CrashLoopBackOff status. You can verify by running the oc describe command on your container, which displays the following message in this scenario: modprobe: ERROR: could not insert '<your_kmod_name>': Required key not available 4.10. KMM firmware support Kernel modules sometimes need to load firmware files from the file system. KMM supports copying firmware files from the ModuleLoader image to the node's file system. The contents of .spec.moduleLoader.container.modprobe.firmwarePath are copied into the /var/lib/firmware path on the node before running the modprobe command to insert the kernel module. All files and empty directories are removed from that location before running the modprobe -r command to unload the kernel module, when the pod is terminated. Additional resources Creating a ModuleLoader image . 4.10.1. 
Configuring the lookup path on nodes On OpenShift Container Platform nodes, the set of default lookup paths for firmwares does not include the /var/lib/firmware path. Procedure Use the Machine Config Operator to create a MachineConfig custom resource (CR) that contains the /var/lib/firmware path: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 99-worker-kernel-args-firmware-path spec: kernelArguments: - 'firmware_class.path=/var/lib/firmware' 1 You can configure the label based on your needs. In the case of single-node OpenShift, use either control-pane or master objects. By applying the MachineConfig CR, the nodes are automatically rebooted. Additional resources Machine Config Operator . 4.10.2. Building a ModuleLoader image Procedure In addition to building the kernel module itself, include the binary firmware in the builder image: FROM registry.redhat.io/ubi8/ubi-minimal as builder # Build the kmod RUN ["mkdir", "/firmware"] RUN ["curl", "-o", "/firmware/firmware.bin", "https://artifacts.example.com/firmware.bin"] FROM registry.redhat.io/ubi8/ubi-minimal # Copy the kmod, install modprobe, run depmod COPY --from=builder /firmware /firmware 4.10.3. Tuning the Module resource Procedure Set .spec.moduleLoader.container.modprobe.firmwarePath in the Module custom resource (CR): apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: my-kmod spec: moduleLoader: container: modprobe: moduleName: my-kmod # Required firmwarePath: /firmware 1 1 Optional: Copies /firmware/* into /var/lib/firmware/ on the node. 4.11. Troubleshooting KMM When troubleshooting KMM installation issues, you can monitor logs to determine at which stage issues occur. Then, retrieve diagnostic data relevant to that stage. 4.11.1. Using the must-gather tool The oc adm must-gather command is the preferred way to collect a support bundle and provide debugging information to Red Hat Support. Collect specific information by running the command with the appropriate arguments as described in the following sections. Additional resources About the must-gather tool 4.11.1.1. Gathering data for KMM Procedure Gather the data for the KMM Operator controller manager: Set the MUST_GATHER_IMAGE variable: USD export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm kmm-operator-controller-manager -ojsonpath='{.spec.template.spec.containers[?(@.name=="manager")].env[?(@.name=="RELATED_IMAGES_MUST_GATHER")].value}') Note Use the -n <namespace> switch to specify a namespace if you installed KMM in a custom namespace. Run the must-gather tool: USD oc adm must-gather --image="USD{MUST_GATHER_IMAGE}" -- /usr/bin/gather View the Operator logs: USD oc logs -fn openshift-kmm deployments/kmm-operator-controller-manager Example 4.1. 
Example output I0228 09:36:37.352405 1 request.go:682] Waited for 1.001998746s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/machine.openshift.io/v1beta1?timeout=32s I0228 09:36:40.767060 1 listener.go:44] kmm/controller-runtime/metrics "msg"="Metrics server is starting to listen" "addr"="127.0.0.1:8080" I0228 09:36:40.769483 1 main.go:234] kmm/setup "msg"="starting manager" I0228 09:36:40.769907 1 internal.go:366] kmm "msg"="Starting server" "addr"={"IP":"127.0.0.1","Port":8080,"Zone":""} "kind"="metrics" "path"="/metrics" I0228 09:36:40.770025 1 internal.go:366] kmm "msg"="Starting server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe" I0228 09:36:40.770128 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm/kmm.sigs.x-k8s.io... I0228 09:36:40.784396 1 leaderelection.go:258] successfully acquired lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784876 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1beta1.Module" I0228 09:36:40.784925 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.DaemonSet" I0228 09:36:40.784968 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.Build" I0228 09:36:40.785001 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.Job" I0228 09:36:40.785025 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" "source"="kind source: *v1.Node" I0228 09:36:40.785039 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="Module" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="Module" I0228 09:36:40.785458 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PodNodeModule" "controllerGroup"="" "controllerKind"="Pod" "source"="kind source: *v1.Pod" I0228 09:36:40.786947 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1beta1.PreflightValidation" I0228 09:36:40.787406 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1.Build" I0228 09:36:40.787474 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1.Job" I0228 09:36:40.787488 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" "source"="kind source: *v1beta1.Module" I0228 09:36:40.787603 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="NodeKernel" "controllerGroup"="" "controllerKind"="Node" "source"="kind source: *v1.Node" I0228 09:36:40.787634 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="NodeKernel" "controllerGroup"="" "controllerKind"="Node" I0228 09:36:40.787680 1 controller.go:193] kmm 
"msg"="Starting Controller" "controller"="PreflightValidation" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidation" I0228 09:36:40.785607 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "source"="kind source: *v1.ImageStream" I0228 09:36:40.787822 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="preflightvalidationocp" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidationOCP" "source"="kind source: *v1beta1.PreflightValidationOCP" I0228 09:36:40.787853 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" I0228 09:36:40.787879 1 controller.go:185] kmm "msg"="Starting EventSource" "controller"="preflightvalidationocp" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidationOCP" "source"="kind source: *v1beta1.PreflightValidation" I0228 09:36:40.787905 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="preflightvalidationocp" "controllerGroup"="kmm.sigs.x-k8s.io" "controllerKind"="PreflightValidationOCP" I0228 09:36:40.786489 1 controller.go:193] kmm "msg"="Starting Controller" "controller"="PodNodeModule" "controllerGroup"="" "controllerKind"="Pod" 4.11.1.2. Gathering data for KMM-Hub Procedure Gather the data for the KMM Operator hub controller manager: Set the MUST_GATHER_IMAGE variable: USD export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm-hub kmm-operator-hub-controller-manager -ojsonpath='{.spec.template.spec.containers[?(@.name=="manager")].env[?(@.name=="RELATED_IMAGES_MUST_GATHER")].value}') Note Use the -n <namespace> switch to specify a namespace if you installed KMM in a custom namespace. Run the must-gather tool: USD oc adm must-gather --image="USD{MUST_GATHER_IMAGE}" -- /usr/bin/gather -u View the Operator logs: USD oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller-manager Example 4.2. Example output I0417 11:34:08.807472 1 request.go:682] Waited for 1.023403273s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/tuned.openshift.io/v1?timeout=32s I0417 11:34:12.373413 1 listener.go:44] kmm-hub/controller-runtime/metrics "msg"="Metrics server is starting to listen" "addr"="127.0.0.1:8080" I0417 11:34:12.376253 1 main.go:150] kmm-hub/setup "msg"="Adding controller" "name"="ManagedClusterModule" I0417 11:34:12.376621 1 main.go:186] kmm-hub/setup "msg"="starting manager" I0417 11:34:12.377690 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io... 
I0417 11:34:12.378078 1 internal.go:366] kmm-hub "msg"="Starting server" "addr"={"IP":"127.0.0.1","Port":8080,"Zone":""} "kind"="metrics" "path"="/metrics" I0417 11:34:12.378222 1 internal.go:366] kmm-hub "msg"="Starting server" "addr"={"IP":"::","Port":8081,"Zone":""} "kind"="health probe" I0417 11:34:12.395703 1 leaderelection.go:258] successfully acquired lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.396334 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1beta1.ManagedClusterModule" I0417 11:34:12.396403 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.ManifestWork" I0417 11:34:12.396430 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.Build" I0417 11:34:12.396469 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.Job" I0417 11:34:12.396522 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "source"="kind source: *v1.ManagedCluster" I0417 11:34:12.396543 1 controller.go:193] kmm-hub "msg"="Starting Controller" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" I0417 11:34:12.397175 1 controller.go:185] kmm-hub "msg"="Starting EventSource" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "source"="kind source: *v1.ImageStream" I0417 11:34:12.397221 1 controller.go:193] kmm-hub "msg"="Starting Controller" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" I0417 11:34:12.498335 1 filter.go:196] kmm-hub "msg"="Listing all ManagedClusterModules" "managedcluster"="local-cluster" I0417 11:34:12.498570 1 filter.go:205] kmm-hub "msg"="Listed ManagedClusterModules" "count"=0 "managedcluster"="local-cluster" I0417 11:34:12.498629 1 filter.go:238] kmm-hub "msg"="Adding reconciliation requests" "count"=0 "managedcluster"="local-cluster" I0417 11:34:12.498687 1 filter.go:196] kmm-hub "msg"="Listing all ManagedClusterModules" "managedcluster"="sno1-0" I0417 11:34:12.498750 1 filter.go:205] kmm-hub "msg"="Listed ManagedClusterModules" "count"=0 "managedcluster"="sno1-0" I0417 11:34:12.498801 1 filter.go:238] kmm-hub "msg"="Adding reconciliation requests" "count"=0 "managedcluster"="sno1-0" I0417 11:34:12.501947 1 controller.go:227] kmm-hub "msg"="Starting workers" "controller"="imagestream" "controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "worker count"=1 I0417 11:34:12.501948 1 controller.go:227] kmm-hub "msg"="Starting workers" "controller"="ManagedClusterModule" "controllerGroup"="hub.kmm.sigs.x-k8s.io" "controllerKind"="ManagedClusterModule" "worker count"=1 I0417 11:34:12.502285 1 imagestream_reconciler.go:50] kmm-hub "msg"="registered imagestream info mapping" "ImageStream"={"name":"driver-toolkit","namespace":"openshift"} "controller"="imagestream" 
"controllerGroup"="image.openshift.io" "controllerKind"="ImageStream" "dtkImage"="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df42b4785a7a662b30da53bdb0d206120cf4d24b45674227b16051ba4b7c3934" "name"="driver-toolkit" "namespace"="openshift" "osImageVersion"="412.86.202302211547-0" "reconcileID"="e709ff0a-5664-4007-8270-49b5dff8bae9" 4.12. KMM hub and spoke In hub and spoke scenarios, many spoke clusters are connected to a central, powerful hub cluster. Kernel Module Management (KMM) depends on Red Hat Advanced Cluster Management (RHACM) to operate in hub and spoke environments. KMM is compatible with hub and spoke environments through decoupling KMM features. A ManagedClusterModule Custom Resource Definition (CRD) is provided to wrap the existing Module CRD and extend it to select Spoke clusters. Also provided is KMM-Hub, a new standalone controller that builds images and signs modules on the hub cluster. In hub and spoke setups, spokes are focused, resource-constrained clusters that are centrally managed by a hub cluster. Spokes run the single-cluster edition of KMM, with those resource-intensive features disabled. To adapt KMM to this environment, you should reduce the workload running on the spokes to the minimum, while the hub takes care of the expensive tasks. Building kernel module images and signing the .ko files, should run on the hub. The scheduling of the Module Loader and Device Plugin DaemonSets can only happen on the spokes. Additional resources Red Hat Advanced Cluster Management (RHACM) 4.12.1. KMM-Hub The KMM project provides KMM-Hub, an edition of KMM dedicated to hub clusters. KMM-Hub monitors all kernel versions running on the spokes and determines the nodes on the cluster that should receive a kernel module. KMM-Hub runs all compute-intensive tasks such as image builds and kmod signing, and prepares the trimmed-down Module to be transferred to the spokes through RHACM. Note KMM-Hub cannot be used to load kernel modules on the hub cluster. Install the regular edition of KMM to load kernel modules. Additional resources Installing KMM 4.12.2. Installing KMM-Hub You can use one of the following methods to install KMM-Hub: Using the Operator Lifecycle Manager (OLM) Creating KMM resources Additional resources KMM Operator bundle 4.12.2.1. Installing KMM-Hub using the Operator Lifecycle Manager Use the Operators section of the OpenShift console to install KMM-Hub. 4.12.2.2. Installing KMM-Hub by creating KMM resources Procedure If you want to install KMM-Hub programmatically, you can use the following resources to create the Namespace , OperatorGroup and Subscription resources: --- apiVersion: v1 kind: Namespace metadata: name: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub spec: channel: stable installPlanApproval: Automatic name: kernel-module-management-hub source: redhat-operators sourceNamespace: openshift-marketplace 4.12.3. Using the ManagedClusterModule CRD Use the ManagedClusterModule Custom Resource Definition (CRD) to configure the deployment of kernel modules on spoke clusters. This CRD is cluster-scoped, wraps a Module spec and adds the following additional fields: apiVersion: hub.kmm.sigs.x-k8s.io/v1beta1 kind: ManagedClusterModule metadata: name: <my-mcm> # No namespace, because this resource is cluster-scoped. 
spec: moduleSpec: 1 selector: 2 node-wants-my-mcm: 'true' spokeNamespace: <some-namespace> 3 selector: 4 wants-my-mcm: 'true' 1 moduleSpec : Contains moduleLoader and devicePlugin sections, similar to a Module resource. 2 Selects nodes within the ManagedCluster . 3 Specifies in which namespace the Module should be created. 4 Selects ManagedCluster objects. If build or signing instructions are present in .spec.moduleSpec , those pods are run on the hub cluster in the operator's namespace. When the .spec.selector matches one or more ManagedCluster resources, then KMM-Hub creates a ManifestWork resource in the corresponding namespace(s). ManifestWork contains a trimmed-down Module resource, with kernel mappings preserved but all build and sign subsections are removed. containerImage fields that contain image names ending with a tag are replaced with their digest equivalent. 4.12.4. Running KMM on the spoke After installing KMM on the spoke, no further action is required. Create a ManagedClusterModule object from the hub to deploy kernel modules on spoke clusters. Procedure You can install KMM on the spokes cluster through a RHACM Policy object. In addition to installing KMM from the Operator hub and running it in a lightweight spoke mode, the Policy configures additional RBAC required for the RHACM agent to be able to manage Module resources. Use the following RHACM policy to install KMM on spoke clusters: 1 The spec.clusterSelector field can be customized to target select clusters only.
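The policy referenced above is not reproduced here. The following is a minimal sketch of the general shape of such an RHACM Policy, PlacementRule, and PlacementBinding; the names, namespace, channel, and cluster selector are illustrative assumptions and not the exact policy from this procedure, and the additional RBAC that the real policy configures for the RHACM agent to manage Module resources is omitted:

apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: install-kmm                 # hypothetical name
  namespace: rhacm-policies         # hypothetical namespace
spec:
  remediationAction: enforce
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: install-kmm
        spec:
          remediationAction: enforce
          severity: high
          object-templates:
            - complianceType: mustonlyhave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: openshift-kmm
            - complianceType: mustonlyhave
              objectDefinition:
                apiVersion: operators.coreos.com/v1
                kind: OperatorGroup
                metadata:
                  name: kmm
                  namespace: openshift-kmm
            - complianceType: mustonlyhave
              objectDefinition:
                apiVersion: operators.coreos.com/v1alpha1
                kind: Subscription
                metadata:
                  name: kernel-module-management
                  namespace: openshift-kmm
                spec:
                  channel: release-1.0
                  name: kernel-module-management
                  source: redhat-operators
                  sourceNamespace: openshift-marketplace
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: kmm-spokes
  namespace: rhacm-policies
spec:
  clusterSelector:                  # 1: customize to target select clusters only
    matchExpressions:
      - key: name
        operator: NotIn
        values:
          - local-cluster
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: kmm-spokes
  namespace: rhacm-policies
placementRef:
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
  name: kmm-spokes
subjects:
  - apiGroup: policy.open-cluster-management.io
    kind: Policy
    name: install-kmm

You would apply such a policy on the hub cluster, for example with oc apply -f <policy_file>.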
[ "apiVersion: v1 kind: Namespace metadata: name: openshift-kmm", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0", "oc create -f kmm-sub.yaml", "oc get -n openshift-kmm deployments.apps kmm-operator-controller-manager", "NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller-manager 1/1 1 1 97s", "apiVersion: v1 kind: Namespace metadata: name: openshift-kmm", "allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: false allowPrivilegedContainer: false allowedCapabilities: - NET_BIND_SERVICE apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: [] kind: SecurityContextConstraints metadata: name: restricted-v2 priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - ALL runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs seccompProfiles: - runtime/default supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret", "oc apply -f kmm-security-constraint.yaml", "oc adm policy add-scc-to-user kmm-security-constraint -z kmm-operator-controller-manager -n openshift-kmm", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management namespace: openshift-kmm", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: release-1.0 installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: kernel-module-management.v1.0.0", "oc create -f kmm-sub.yaml", "oc get -n openshift-kmm deployments.apps kmm-operator-controller-manager", "NAME READY UP-TO-DATE AVAILABLE AGE kmm-operator-controller-manager 1/1 1 1 97s", "oc adm policy add-scc-to-user privileged -z \"USD{serviceAccountName}\" [ -n \"USD{namespace}\" ]", "apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: <my_kmod> spec: moduleLoader: container: modprobe: moduleName: <my_kmod> 1 dirName: /opt 2 firmwarePath: /firmware 3 parameters: 4 - param=1 kernelMappings: 5 - literal: 6.0.15-300.fc37.x86_64 containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64 - regexp: '^.+\\fc37\\.x86_64USD' 6 containerImage: \"some.other.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" - regexp: '^.+USD' 7 containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" build: buildArgs: 8 - name: ARG_NAME value: <some_value> secrets: - name: <some_kubernetes_secret> 9 baseImageRegistryTLS: 10 insecure: false insecureSkipTLSVerify: false 11 dockerfileConfigMap: 12 name: <my_kmod_dockerfile> sign: certSecret: name: <cert_secret> 13 keySecret: name: <key_secret> 14 filesToSign: - /opt/lib/modules/USD{KERNEL_FULL_VERSION}/<my_kmod>.ko registryTLS: 15 insecure: false 16 insecureSkipTLSVerify: false serviceAccountName: <sa_module_loader> 17 devicePlugin: 18 container: image: some.registry/org/device-plugin:latest 19 env: - name: MY_DEVICE_PLUGIN_ENV_VAR value: SOME_VALUE volumeMounts: 20 - mountPath: 
/some/mountPath name: <device_plugin_volume> volumes: 21 - name: <device_plugin_volume> configMap: name: <some_configmap> serviceAccountName: <sa_device_plugin> 22 imageRepoSecret: 23 name: <secret_name> selector: node-role.kubernetes.io/worker: \"\"", "apiVersion: v1 kind: ConfigMap metadata: name: kmm-ci-dockerfile data: dockerfile: | ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_VERSION}/build make all FROM registry.redhat.io/ubi8/ubi-minimal ARG KERNEL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN depmod -b /opt USD{KERNEL_VERSION}", "- regexp: '^.+USD' containerImage: \"some.registry/org/<my_kmod>:USD{KERNEL_FULL_VERSION}\" build: buildArgs: 1 - name: ARG_NAME value: <some_value> secrets: 2 - name: <some_kubernetes_secret> 3 baseImageRegistryTLS: insecure: false 4 insecureSkipTLSVerify: false 5 dockerfileConfigMap: 6 name: <my_kmod_dockerfile> registryTLS: insecure: false 7 insecureSkipTLSVerify: false 8", "ARG DTK_AUTO FROM USD{DTK_AUTO} as builder ARG KERNEL_VERSION WORKDIR /usr/src RUN [\"git\", \"clone\", \"https://github.com/rh-ecosystem-edge/kernel-module-management.git\"] WORKDIR /usr/src/kernel-module-management/ci/kmm-kmod RUN KERNEL_SRC_DIR=/lib/modules/USD{KERNEL_VERSION}/build make all FROM registry.redhat.io/ubi8/ubi-minimal ARG KERNEL_VERSION RUN microdnf install kmod COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_a.ko /opt/lib/modules/USD{KERNEL_VERSION}/ COPY --from=builder /usr/src/kernel-module-management/ci/kmm-kmod/kmm_ci_b.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN depmod -b /opt USD{KERNEL_VERSION}", "openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch -config configuration_file.config -outform DER -out my_signing_key_pub.der -keyout my_signing_key.priv", "oc create secret generic my-signing-key --from-file=key=<my_signing_key.priv>", "oc create secret generic my-signing-key-pub --from-file=cert=<my_signing_key_pub.der>", "cat sb_cert.priv | base64 -w 0 > my_signing_key2.base64", "cat sb_cert.cer | base64 -w 0 > my_signing_key_pub.base64", "apiVersion: v1 kind: Secret metadata: name: my-signing-key-pub namespace: default 1 type: Opaque data: cert: <base64_encoded_secureboot_public_key> --- apiVersion: v1 kind: Secret metadata: name: my-signing-key namespace: default 2 type: Opaque data: key: <base64_encoded_secureboot_private_key>", "oc apply -f <yaml_filename>", "oc get secret -o yaml <certificate secret name> | awk '/cert/{print USD2; exit}' | base64 -d | openssl x509 -inform der -text", "oc get secret -o yaml <private key secret name> | awk '/key/{print USD2; exit}' | base64 -d", "--- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module spec: moduleLoader: serviceAccountName: default container: modprobe: 1 moduleName: '<your module name>' kernelMappings: # the kmods will be deployed on all nodes in the cluster with a kernel that matches the regexp - regexp: '^.*\\.x86_64USD' # the container to produce containing the signed kmods containerImage: <image name e.g. 
quay.io/myuser/my-driver:<kernelversion>-signed> sign: # the image containing the unsigned kmods (we need this because we are not building the kmods within the cluster) unsignedImage: <image name e.g. quay.io/myuser/my-driver:<kernelversion> > keySecret: # a secret holding the private secureboot key with the key 'key' name: <private key secret name> certSecret: # a secret holding the public secureboot key with the key 'cert' name: <certificate secret name> filesToSign: # full path within the unsignedImage container to the kmod(s) to sign - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: # the name of a secret containing credentials to pull unsignedImage and push containerImage to the registry name: repo-pull-secret selector: kubernetes.io/arch: amd64", "--- apiVersion: v1 kind: ConfigMap metadata: name: example-module-dockerfile namespace: default 1 data: Dockerfile: | ARG DTK_AUTO ARG KERNEL_VERSION FROM USD{DTK_AUTO} as builder WORKDIR /build/ RUN git clone -b main --single-branch https://github.com/rh-ecosystem-edge/kernel-module-management.git WORKDIR kernel-module-management/ci/kmm-kmod/ RUN make FROM registry.access.redhat.com/ubi8/ubi:latest ARG KERNEL_VERSION RUN yum -y install kmod && yum clean all RUN mkdir -p /opt/lib/modules/USD{KERNEL_VERSION} COPY --from=builder /build/kernel-module-management/ci/kmm-kmod/*.ko /opt/lib/modules/USD{KERNEL_VERSION}/ RUN /usr/sbin/depmod -b /opt --- apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: example-module namespace: default 2 spec: moduleLoader: serviceAccountName: default 3 container: modprobe: moduleName: simple_kmod kernelMappings: - regexp: '^.*\\.x86_64USD' containerImage: < the name of the final driver container to produce> build: dockerfileConfigMap: name: example-module-dockerfile sign: keySecret: name: <private key secret name> certSecret: name: <certificate secret name> filesToSign: - /opt/lib/modules/4.18.0-348.2.1.el8_5.x86_64/kmm_ci_a.ko imageRepoSecret: 4 name: repo-pull-secret selector: # top-level selector kubernetes.io/arch: amd64", "modprobe: ERROR: could not insert '<your_kmod_name>': Required key not available", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 99-worker-kernel-args-firmware-path spec: kernelArguments: - 'firmware_class.path=/var/lib/firmware'", "FROM registry.redhat.io/ubi8/ubi-minimal as builder Build the kmod RUN [\"mkdir\", \"/firmware\"] RUN [\"curl\", \"-o\", \"/firmware/firmware.bin\", \"https://artifacts.example.com/firmware.bin\"] FROM registry.redhat.io/ubi8/ubi-minimal Copy the kmod, install modprobe, run depmod COPY --from=builder /firmware /firmware", "apiVersion: kmm.sigs.x-k8s.io/v1beta1 kind: Module metadata: name: my-kmod spec: moduleLoader: container: modprobe: moduleName: my-kmod # Required firmwarePath: /firmware 1", "export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm kmm-operator-controller-manager -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGES_MUST_GATHER\")].value}')", "oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather", "oc logs -fn openshift-kmm deployments/kmm-operator-controller-manager", "I0228 09:36:37.352405 1 request.go:682] Waited for 1.001998746s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/machine.openshift.io/v1beta1?timeout=32s I0228 09:36:40.767060 1 listener.go:44] 
kmm/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0228 09:36:40.769483 1 main.go:234] kmm/setup \"msg\"=\"starting manager\" I0228 09:36:40.769907 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0228 09:36:40.770025 1 internal.go:366] kmm \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0228 09:36:40.770128 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784396 1 leaderelection.go:258] successfully acquired lease openshift-kmm/kmm.sigs.x-k8s.io I0228 09:36:40.784876 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.784925 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.DaemonSet\" I0228 09:36:40.784968 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.785001 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.785025 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.785039 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"Module\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"Module\" I0228 09:36:40.785458 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\" \"source\"=\"kind source: *v1.Pod\" I0228 09:36:40.786947 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787406 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Build\" I0228 09:36:40.787474 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1.Job\" I0228 09:36:40.787488 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" \"source\"=\"kind source: *v1beta1.Module\" I0228 09:36:40.787603 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"NodeKernel\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" \"source\"=\"kind source: *v1.Node\" I0228 09:36:40.787634 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"NodeKernel\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Node\" 
I0228 09:36:40.787680 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PreflightValidation\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidation\" I0228 09:36:40.785607 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0228 09:36:40.787822 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidationOCP\" I0228 09:36:40.787853 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0228 09:36:40.787879 1 controller.go:185] kmm \"msg\"=\"Starting EventSource\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" \"source\"=\"kind source: *v1beta1.PreflightValidation\" I0228 09:36:40.787905 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"preflightvalidationocp\" \"controllerGroup\"=\"kmm.sigs.x-k8s.io\" \"controllerKind\"=\"PreflightValidationOCP\" I0228 09:36:40.786489 1 controller.go:193] kmm \"msg\"=\"Starting Controller\" \"controller\"=\"PodNodeModule\" \"controllerGroup\"=\"\" \"controllerKind\"=\"Pod\"", "export MUST_GATHER_IMAGE=USD(oc get deployment -n openshift-kmm-hub kmm-operator-hub-controller-manager -ojsonpath='{.spec.template.spec.containers[?(@.name==\"manager\")].env[?(@.name==\"RELATED_IMAGES_MUST_GATHER\")].value}')", "oc adm must-gather --image=\"USD{MUST_GATHER_IMAGE}\" -- /usr/bin/gather -u", "oc logs -fn openshift-kmm-hub deployments/kmm-operator-hub-controller-manager", "I0417 11:34:08.807472 1 request.go:682] Waited for 1.023403273s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/tuned.openshift.io/v1?timeout=32s I0417 11:34:12.373413 1 listener.go:44] kmm-hub/controller-runtime/metrics \"msg\"=\"Metrics server is starting to listen\" \"addr\"=\"127.0.0.1:8080\" I0417 11:34:12.376253 1 main.go:150] kmm-hub/setup \"msg\"=\"Adding controller\" \"name\"=\"ManagedClusterModule\" I0417 11:34:12.376621 1 main.go:186] kmm-hub/setup \"msg\"=\"starting manager\" I0417 11:34:12.377690 1 leaderelection.go:248] attempting to acquire leader lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.378078 1 internal.go:366] kmm-hub \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"127.0.0.1\",\"Port\":8080,\"Zone\":\"\"} \"kind\"=\"metrics\" \"path\"=\"/metrics\" I0417 11:34:12.378222 1 internal.go:366] kmm-hub \"msg\"=\"Starting server\" \"addr\"={\"IP\":\"::\",\"Port\":8081,\"Zone\":\"\"} \"kind\"=\"health probe\" I0417 11:34:12.395703 1 leaderelection.go:258] successfully acquired lease openshift-kmm-hub/kmm-hub.sigs.x-k8s.io I0417 11:34:12.396334 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1beta1.ManagedClusterModule\" I0417 11:34:12.396403 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind 
source: *v1.ManifestWork\" I0417 11:34:12.396430 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Build\" I0417 11:34:12.396469 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.Job\" I0417 11:34:12.396522 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"source\"=\"kind source: *v1.ManagedCluster\" I0417 11:34:12.396543 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" I0417 11:34:12.397175 1 controller.go:185] kmm-hub \"msg\"=\"Starting EventSource\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"source\"=\"kind source: *v1.ImageStream\" I0417 11:34:12.397221 1 controller.go:193] kmm-hub \"msg\"=\"Starting Controller\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" I0417 11:34:12.498335 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498570 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498629 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"local-cluster\" I0417 11:34:12.498687 1 filter.go:196] kmm-hub \"msg\"=\"Listing all ManagedClusterModules\" \"managedcluster\"=\"sno1-0\" I0417 11:34:12.498750 1 filter.go:205] kmm-hub \"msg\"=\"Listed ManagedClusterModules\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.498801 1 filter.go:238] kmm-hub \"msg\"=\"Adding reconciliation requests\" \"count\"=0 \"managedcluster\"=\"sno1-0\" I0417 11:34:12.501947 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"worker count\"=1 I0417 11:34:12.501948 1 controller.go:227] kmm-hub \"msg\"=\"Starting workers\" \"controller\"=\"ManagedClusterModule\" \"controllerGroup\"=\"hub.kmm.sigs.x-k8s.io\" \"controllerKind\"=\"ManagedClusterModule\" \"worker count\"=1 I0417 11:34:12.502285 1 imagestream_reconciler.go:50] kmm-hub \"msg\"=\"registered imagestream info mapping\" \"ImageStream\"={\"name\":\"driver-toolkit\",\"namespace\":\"openshift\"} \"controller\"=\"imagestream\" \"controllerGroup\"=\"image.openshift.io\" \"controllerKind\"=\"ImageStream\" \"dtkImage\"=\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df42b4785a7a662b30da53bdb0d206120cf4d24b45674227b16051ba4b7c3934\" \"name\"=\"driver-toolkit\" \"namespace\"=\"openshift\" \"osImageVersion\"=\"412.86.202302211547-0\" \"reconcileID\"=\"e709ff0a-5664-4007-8270-49b5dff8bae9\"", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kernel-module-management-hub namespace: openshift-kmm-hub --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: 
kernel-module-management-hub namespace: openshift-kmm-hub spec: channel: stable installPlanApproval: Automatic name: kernel-module-management-hub source: redhat-operators sourceNamespace: openshift-marketplace", "apiVersion: hub.kmm.sigs.x-k8s.io/v1beta1 kind: ManagedClusterModule metadata: name: <my-mcm> # No namespace, because this resource is cluster-scoped. spec: moduleSpec: 1 selector: 2 node-wants-my-mcm: 'true' spokeNamespace: <some-namespace> 3 selector: 4 wants-my-mcm: 'true'", "--- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: install-kmm spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-kmm spec: severity: high object-templates: - complianceType: mustonlyhave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-kmm - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kmm namespace: openshift-kmm spec: upgradeStrategy: Default - complianceType: mustonlyhave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kernel-module-management namespace: openshift-kmm spec: channel: stable config: env: - name: KMM_MANAGED value: \"1\" installPlanApproval: Automatic name: kernel-module-management source: redhat-operators sourceNamespace: openshift-marketplace - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: kmm-module-manager rules: - apiGroups: [kmm.sigs.x-k8s.io] resources: [modules] verbs: [create, delete, get, list, patch, update, watch] - complianceType: mustonlyhave objectDefinition: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: klusterlet-kmm subjects: - kind: ServiceAccount name: klusterlet-work-sa namespace: open-cluster-management-agent roleRef: kind: ClusterRole name: kmm-module-manager apiGroup: rbac.authorization.k8s.io --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: all-managed-clusters spec: clusterSelector: 1 matchExpressions: [] --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: install-kmm placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: all-managed-clusters subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-kmm" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/specialized_hardware_and_driver_enablement/kernel-module-management-operator
Chapter 17. Virtual Networking
Chapter 17. Virtual Networking This chapter introduces the concepts needed to create, start, stop, remove, and modify virtual networks with libvirt. Additional information can be found in the libvirt reference chapter. 17.1. Virtual Network Switches Libvirt virtual networking uses the concept of a virtual network switch . A virtual network switch is a software construct that operates on a host physical machine server, to which virtual machines (guests) connect. The network traffic for a guest is directed through this switch: Figure 17.1. Virtual network switch with two guests Linux host physical machine servers represent a virtual network switch as a network interface. When the libvirt daemon ( libvirtd ) is first installed and started, the default network interface representing the virtual network switch is virbr0 . This virbr0 interface can be viewed with the ip command like any other interface:
[ "ip addr show virbr0 3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 1b:c4:94:cf:fd:17 brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-virtual_networking
17.2. BIND
17.2. BIND This chapter covers BIND (Berkeley Internet Name Domain), the DNS server included in Red Hat Enterprise Linux. It focuses on the structure of its configuration files, and describes how to administer it both locally and remotely. 17.2.1. Configuring the named Service When the named service is started, it reads the configuration from the files as described in Table 17.1, "The named service configuration files" . Table 17.1. The named service configuration files Path Description /etc/named.conf The main configuration file. /etc/named/ An auxiliary directory for configuration files that are included in the main configuration file. The configuration file consists of a collection of statements with nested options surrounded by opening and closing curly brackets. Note that when editing the file, you have to be careful not to make any syntax error, otherwise the named service will not start. A typical /etc/named.conf file is organized as follows: Note If you have installed the bind-chroot package, the BIND service will run in the /var/named/chroot environment. In that case, the initialization script will mount the above configuration files using the mount --bind command, so that you can manage the configuration outside this environment. There is no need to copy anything into the /var/named/chroot directory because it is mounted automatically. This simplifies maintenance since you do not need to take any special care of BIND configuration files if it is run in a chroot environment. You can organize everything as you would with BIND not running in a chroot environment. The following directories are automatically mounted into /var/named/chroot if they are empty in the /var/named/chroot directory. They must be kept empty if you want them to be mounted into /var/named/chroot : /var/named /etc/pki/dnssec-keys /etc/named /usr/lib64/bind or /usr/lib/bind (architecture dependent). The following files are also mounted if the target file does not exist in /var/named/chroot . /etc/named.conf /etc/rndc.conf /etc/rndc.key /etc/named.rfc1912.zones /etc/named.dnssec.keys /etc/named.iscdlv.key /etc/named.root.key 17.2.1.1. Common Statement Types The following types of statements are commonly used in /etc/named.conf : acl The acl (Access Control List) statement allows you to define groups of hosts, so that they can be permitted or denied access to the nameserver. It takes the following form: The acl-name statement name is the name of the access control list, and the match-element option is usually an individual IP address (such as 10.0.1.1 ) or a CIDR (Classless Inter-Domain Routing) network notation (for example, 10.0.1.0/24 ). For a list of already defined keywords, see Table 17.2, "Predefined access control lists" . Table 17.2. Predefined access control lists Keyword Description any Matches every IP address. localhost Matches any IP address that is in use by the local system. localnets Matches any IP address on any network to which the local system is connected. none Does not match any IP address. The acl statement can be especially useful in conjunction with other statements such as options . Example 17.2, "Using acl in conjunction with options" defines two access control lists, black-hats and red-hats , and adds black-hats on the blacklist while granting red-hats a normal access. Example 17.2. 
Using acl in conjunction with options include The include statement allows you to include files in the /etc/named.conf , so that potentially sensitive data can be placed in a separate file with restricted permissions. It takes the following form: The file-name statement name is an absolute path to a file. Example 17.3. Including a file to /etc/named.conf options The options statement allows you to define global server configuration options as well as to set defaults for other statements. It can be used to specify the location of the named working directory, the types of queries allowed, and much more. It takes the following form: For a list of frequently used option directives, see Table 17.3, "Commonly used options" below. Table 17.3. Commonly used options Option Description allow-query Specifies which hosts are allowed to query the nameserver for authoritative resource records. It accepts an access control list, a collection of IP addresses, or networks in the CIDR notation. All hosts are allowed by default. allow-query-cache Specifies which hosts are allowed to query the nameserver for non-authoritative data such as recursive queries. Only localhost and localnets are allowed by default. blackhole Specifies which hosts are not allowed to query the nameserver. This option should be used when particular host or network floods the server with requests. The default option is none . directory Specifies a working directory for the named service. The default option is /var/named/ . dnssec-enable Specifies whether to return DNSSEC related resource records. The default option is yes . dnssec-validation Specifies whether to prove that resource records are authentic via DNSSEC. The default option is yes . forwarders Specifies a list of valid IP addresses for nameservers to which the requests should be forwarded for resolution. forward Specifies the behavior of the forwarders directive. It accepts the following options: first - The server will query the nameservers listed in the forwarders directive before attempting to resolve the name on its own. only - When unable to query the nameservers listed in the forwarders directive, the server will not attempt to resolve the name on its own. listen-on Specifies the IPv4 network interface on which to listen for queries. On a DNS server that also acts as a gateway, you can use this option to answer queries originating from a single network only. All IPv4 interfaces are used by default. listen-on-v6 Specifies the IPv6 network interface on which to listen for queries. On a DNS server that also acts as a gateway, you can use this option to answer queries originating from a single network only. All IPv6 interfaces are used by default. max-cache-size Specifies the maximum amount of memory to be used for server caches. When the limit is reached, the server causes records to expire prematurely so that the limit is not exceeded. In a server with multiple views, the limit applies separately to the cache of each view. The default option is 32M . notify Specifies whether to notify the secondary nameservers when a zone is updated. It accepts the following options: yes - The server will notify all secondary nameservers. no - The server will not notify any secondary nameserver. master-only - The server will notify primary server for the zone only. explicit - The server will notify only the secondary servers that are specified in the also-notify list within a zone statement. pid-file Specifies the location of the process ID file created by the named service. 
recursion Specifies whether to act as a recursive server. The default option is yes . statistics-file Specifies an alternate location for statistics files. The /var/named/named.stats file is used by default. Important To prevent distributed denial of service (DDoS) attacks, it is recommended that you use the allow-query-cache option to restrict recursive DNS services for a particular subset of clients only. See the BIND 9 Administrator Reference Manual referenced in Section 17.2.7.1, "Installed Documentation" , and the named.conf manual page for a complete list of available options. Example 17.4. Using the options statement zone The zone statement allows you to define the characteristics of a zone, such as the location of its configuration file and zone-specific options, and can be used to override the global options statements. It takes the following form: The zone-name attribute is the name of the zone, zone-class is the optional class of the zone, and option is a zone statement option as described in Table 17.4, "Commonly used options" . The zone-name attribute is particularly important, as it is the default value assigned for the USDORIGIN directive used within the corresponding zone file located in the /var/named/ directory. The named daemon appends the name of the zone to any non-fully qualified domain name listed in the zone file. For example, if a zone statement defines the namespace for example.com , use example.com as the zone-name so that it is placed at the end of host names within the example.com zone file. For more information about zone files, see Section 17.2.2, "Editing Zone Files" . Table 17.4. Commonly used options Option Description allow-query Specifies which clients are allowed to request information about this zone. This option overrides global allow-query option. All query requests are allowed by default. allow-transfer Specifies which secondary servers are allowed to request a transfer of the zone's information. All transfer requests are allowed by default. allow-update Specifies which hosts are allowed to dynamically update information in their zone. The default option is to deny all dynamic update requests. Note that you should be careful when allowing hosts to update information about their zone. Do not set IP addresses in this option unless the server is in the trusted network. Instead, use TSIG key as described in Section 17.2.5.3, "Transaction SIGnatures (TSIG)" . file Specifies the name of the file in the named working directory that contains the zone's configuration data. masters Specifies from which IP addresses to request authoritative zone information. This option is used only if the zone is defined as type slave . notify Specifies whether to notify the secondary nameservers when a zone is updated. It accepts the following options: yes - The server will notify all secondary nameservers. no - The server will not notify any secondary nameserver. master-only - The server will notify primary server for the zone only. explicit - The server will notify only the secondary servers that are specified in the also-notify list within a zone statement. type Specifies the zone type. It accepts the following options: delegation-only - Enforces the delegation status of infrastructure zones such as COM, NET, or ORG. Any answer that is received without an explicit or implicit delegation is treated as NXDOMAIN . This option is only applicable in TLDs (Top-Level Domain) or root zone files used in recursive or caching implementations. 
forward - Forwards all requests for information about this zone to other nameservers. hint - A special type of zone used to point to the root nameservers which resolve queries when a zone is not otherwise known. No configuration beyond the default is necessary with a hint zone. master - Designates the nameserver as authoritative for this zone. A zone should be set as the master if the zone's configuration files reside on the system. slave - Designates the nameserver as a slave server for this zone. Master server is specified in masters directive. Most changes to the /etc/named.conf file of a primary or secondary nameserver involve adding, modifying, or deleting zone statements, and only a small subset of zone statement options is usually needed for a nameserver to work efficiently. In Example 17.5, "A zone statement for a primary nameserver" , the zone is identified as example.com , the type is set to master , and the named service is instructed to read the /var/named/example.com.zone file. It also allows only a secondary nameserver ( 192.168.0.2 ) to transfer the zone. Example 17.5. A zone statement for a primary nameserver A secondary server's zone statement is slightly different. The type is set to slave , and the masters directive is telling named the IP address of the master server. In Example 17.6, "A zone statement for a secondary nameserver" , the named service is configured to query the primary server at the 192.168.0.1 IP address for information about the example.com zone. The received information is then saved to the /var/named/slaves/example.com.zone file. Note that you have to put all slave zones to /var/named/slaves directory, otherwise the service will fail to transfer the zone. Example 17.6. A zone statement for a secondary nameserver 17.2.1.2. Other Statement Types The following types of statements are less commonly used in /etc/named.conf : controls The controls statement allows you to configure various security requirements necessary to use the rndc command to administer the named service. See Section 17.2.3, "Using the rndc Utility" for more information on the rndc utility and its usage. key The key statement allows you to define a particular key by name. Keys are used to authenticate various actions, such as secure updates or the use of the rndc command. Two options are used with key : algorithm algorithm-name - The type of algorithm to be used (for example, hmac-md5 ). secret " key-value " - The encrypted key. See Section 17.2.3, "Using the rndc Utility" for more information on the rndc utility and its usage. logging The logging statement allows you to use multiple types of logs, so called channels . By using the channel option within the statement, you can construct a customized type of log with its own file name ( file ), size limit ( size ), versioning ( version ), and level of importance ( severity ). Once a customized channel is defined, a category option is used to categorize the channel and begin logging when the named service is restarted. By default, named sends standard messages to the rsyslog daemon, which places them in /var/log/messages . Several standard channels are built into BIND with various severity levels, such as default_syslog (which handles informational logging messages) and default_debug (which specifically handles debugging messages). A default category, called default , uses the built-in channels to do normal logging without any special configuration. 
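As a brief illustration only (the channel name, log file path, and size limits below are invented for this sketch, not taken from a default configuration), a customized channel that records query messages in its own file could be declared in /etc/named.conf as follows:

logging {
    channel custom_query_log {
        file "/var/named/data/query.log" versions 3 size 10m;
        severity info;
        print-time yes;
    };
    category queries { custom_query_log; };
};

The channel keeps three rotated versions of a 10 MB file, and the category directive routes query logging to it once the named service is restarted.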
Customizing the logging configuration can become very detailed and is beyond the scope of this chapter. For information on creating custom BIND logs, see the BIND 9 Administrator Reference Manual referenced in Section 17.2.7.1, "Installed Documentation" . server The server statement allows you to specify options that affect how the named service should respond to remote nameservers, especially with regard to notifications and zone transfers. The transfer-format option controls the number of resource records that are sent with each message. It can be either one-answer (only one resource record) or many-answers (multiple resource records). Note that while the many-answers option is more efficient, it is not supported by older versions of BIND. trusted-keys The trusted-keys statement allows you to specify assorted public keys used for secure DNS (DNSSEC). See Section 17.2.5.4, "DNS Security Extensions (DNSSEC)" for more information on this topic. view The view statement allows you to create special views depending upon which network the host querying the nameserver is on. This allows some hosts to receive one answer regarding a zone while other hosts receive entirely different information. Alternatively, certain zones may only be made available to particular trusted hosts while non-trusted hosts can only make queries for other zones. Multiple views can be used as long as their names are unique. The match-clients option allows you to specify the IP addresses that apply to a particular view. If the options statement is used within a view, it overrides the already configured global options. Finally, most view statements contain multiple zone statements that apply to the match-clients list. Note that the order in which the view statements are listed is important, as the first statement that matches a particular client's IP address is used. For more information on this topic, see Section 17.2.5.1, "Multiple Views" . 17.2.1.3. Comment Tags In addition to statements, the /etc/named.conf file can also contain comments. Comments are ignored by the named service, but can prove useful when providing additional information to a user. The following are valid comment tags: // Any text after the // characters to the end of the line is considered a comment. For example: # Any text after the # character to the end of the line is considered a comment. For example: /* and */ Any block of text enclosed in /* and */ is considered a comment. For example:
[ "statement-1 [\" statement-1-name \"] [ statement-1-class ] { option-1 ; option-2 ; option-N ; }; statement-2 [\" statement-2-name \"] [ statement-2-class ] { option-1 ; option-2 ; option-N ; }; statement-N [\" statement-N-name \"] [ statement-N-class ] { option-1 ; option-2 ; option-N ; };", "acl acl-name { match-element ; };", "acl black-hats { 10.0.2.0/24; 192.168.0.0/24; 1234:5678::9abc/24; }; acl red-hats { 10.0.1.0/24; }; options { blackhole { black-hats; }; allow-query { red-hats; }; allow-query-cache { red-hats; }; };", "include \" file-name \"", "include \"/etc/named.rfc1912.zones\";", "options { option ; };", "options { allow-query { localhost; }; listen-on port 53 { 127.0.0.1; }; listen-on-v6 port 53 { ::1; }; max-cache-size 256M; directory \"/var/named\"; statistics-file \"/var/named/data/named_stats.txt\"; recursion yes; dnssec-enable yes; dnssec-validation yes; };", "zone zone-name [ zone-class ] { option ; };", "zone \"example.com\" IN { type master; file \"example.com.zone\"; allow-transfer { 192.168.0.2; }; };", "zone \"example.com\" { type slave; file \"slaves/example.com.zone\"; masters { 192.168.0.1; }; };", "notify yes; // notify all secondary nameservers", "notify yes; # notify all secondary nameservers", "notify yes; /* notify all secondary nameservers */" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-bind
Chapter 10. Authentication for Enrolling Certificates
Chapter 10. Authentication for Enrolling Certificates This chapter covers how to enroll end entity certificates, how to create and manage server certificates, the authentication methods available in the Certificate System to use when enrolling end entity certificates, and how to set up those authentication methods. Enrollment is the process of issuing certificates to an end entity. The process consists of creating and submitting the request, authenticating the user who requested it, and then approving the request and issuing the certificate. The method used to authenticate the end entity determines the entire enrollment process. There are three ways that the Certificate System can authenticate an entity: In agent-approved enrollment, end-entity requests are sent to an agent for approval. The agent approves the certificate request. In automatic enrollment, end-entity requests are authenticated using a plug-in, and then the certificate request is processed; an agent is not involved in the enrollment process. In CMC enrollment , a third-party application can create a request that is signed by an agent and then automatically processed. A Certificate Manager is initially configured for agent-approved enrollment and for CMC authentication. Automated enrollment is enabled by configuring one of the authentication plug-in modules. More than one authentication method can be configured in a single instance of a subsystem. Note An email can be automatically sent to an end entity when the certificate is issued for any authentication method by configuring automated notifications. See Chapter 12, Using Automated Notifications for more information on notifications. 10.1. Configuring Agent-Approved Enrollment The Certificate Manager is initially configured for agent-approved enrollment. An end entity makes a request, which is sent to the agent queue for an agent's approval. An agent can modify the request, change its status, reject it, or approve it. Once the request is approved, the signed request is sent to the Certificate Manager for processing. The Certificate Manager processes the request and issues the certificate. The agent-approved enrollment method is not configurable. If a Certificate Manager is not configured for any other enrollment method, the server automatically sends all certificate-related requests to a queue where they await agent approval. This ensures that all requests that lack authentication credentials are sent to the request queue for agent approval. To use agent-approved enrollment, leave the authentication method blank in the profile's .cfg file. For example:
[ "auth.instance_id=" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/authentication_for_enrolling_certificates
1.5. Package Selection
1.5. Package Selection Use the %packages command to begin a kickstart file section that lists the packages you would like to install (this is for installations only, as package selection during upgrades is not supported). Packages can be specified by group or by individual package name. The installation program defines several groups that contain related packages. Refer to the RedHat/base/comps.xml file on the first Red Hat Enterprise Linux CD-ROM for a list of groups. Each group has an id, user visibility value, name, description, and package list. In the package list, the packages marked as mandatory are always installed if the group is selected, the packages marked default are selected by default if the group is selected, and the packages marked optional must be specifically selected even if the group is selected to be installed. In most cases, it is only necessary to list the desired groups and not individual packages. Note that the Core and Base groups are always selected by default, so it is not necessary to specify them in the %packages section. Here is an example %packages selection: As you can see, groups are specified, one to a line, starting with an @ symbol, a space, and then the full group name as given in the comps.xml file. Groups can also be specified using the id for the group, such as gnome-desktop . Specify individual packages with no additional characters (the dhcp line in the example above is an individual package). You can also specify which packages not to install from the default package list: The following options are available for the %packages option: --resolvedeps Install the listed packages and automatically resolve package dependencies. If this option is not specified and there are package dependencies, the automated installation pauses and prompts the user. For example: --ignoredeps Ignore the unresolved dependencies and install the listed packages without the dependencies. For example: --ignoremissing Ignore the missing packages and groups instead of halting the installation to ask if the installation should be aborted or continued. For example:
[ "%packages @ X Window System @ GNOME Desktop Environment @ Graphical Internet @ Sound and Video dhcp", "-autofs", "%packages --resolvedeps", "%packages --ignoredeps", "%packages --ignoremissing" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/kickstart_installations-package_selection
Chapter 6. RHEL 8.1.0 release
Chapter 6. RHEL 8.1.0 release 6.1. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 8.1. 6.1.1. Installer and image creation Modules can now be disabled during Kickstart installation With this enhancement, users can now disable a module to prevent the installation of packages from the module. To disable a module during Kickstart installation, use the command: module --name=foo --stream=bar --disable (BZ#1655523) Support for the repo.git section to blueprints is now available A new repo.git blueprint section allows users to include extra files in their image build. The files must be hosted in git repository that is accessible from the lorax-composer build server. ( BZ#1709594 ) Image Builder now supports image creation for more cloud providers With this update, the Image Builder expanded the number of Cloud Providers that the Image Builder can create an image for. As a result, now you can create RHEL images that can be deployed also on Google Cloud and Alibaba Cloud as well as run the custom instances on these platforms. ( BZ#1689140 ) 6.1.2. Software management dnf-utils has been renamed to yum-utils With this update, the dnf-utils package, that is a part of the YUM stack, has been renamed to yum-utils . For compatibility reasons, the package can still be installed using the dnf-utils name, and will automatically replace the original package when upgrading your system. (BZ#1722093) 6.1.3. Subscription management subscription-manager now reports the role, usage and add-ons values With this update, the subscription-manager can now display the Role, Usage and Add-ons values for each subscription available in the current organization, which is registered to either the Customer Portal or to the Satellite. To show the available subscriptions with the addition of Role, Usage and Add-ons values for those subscriptions use: To show the consumed subscriptions including the additional Role, Usage and Add-ons values use: (BZ#1665167) 6.1.4. Infrastructure services tuned rebased to version 2.12 The tuned packages have been upgraded to upstream version 2.12, which provides a number of bug fixes and enhancements over the version, notably: Handling of devices that have been removed and reattached has been fixed. Support for negation of CPU list has been added. Performance of runtime kernel parameter configuration has been improved by switching from the sysctl tool to a new implementation specific to Tuned . ( BZ#1685585 ) chrony rebased to version 3.5 The chrony packages have been upgraded to upstream version 3.5, which provides a number of bug fixes and enhancements over the version, notably: Support for more accurate synchronization of the system clock with hardware timestamping in RHEL 8.1 kernel has been added. Hardware timestamping has received significant improvements. The range of available polling intervals has been extended. The filter option has been added to NTP sources. ( BZ#1685469 ) New FRRouting routing protocol stack is available With this update, Quagga has been replaced by Free Range Routing ( FRRouting , or FRR ), which is a new routing protocol stack. FRR is provided by the frr package available in the AppStream repository. FRR provides TCP/IP-based routing services with support for multiple IPv4 and IPv6 routing protocols, such as BGP , IS-IS , OSPF , PIM , and RIP . With FRR installed, the system can act as a dedicated router, which exchanges routing information with other routers in either internal or external network. 
(BZ#1657029) GNU enscript now supports ISO-8859-15 encoding With this update, support for ISO-8859-15 encoding has been added into the GNU enscript program. ( BZ#1664366 ) Improved accuracy of measuring system clock offset in phc2sys The phc2sys program from the linuxptp packages now supports a more accurate method for measuring the offset of the system clock. (BZ#1677217) ptp4l now supports team interfaces in active-backup mode With this update, support for team interfaces in active-backup mode has been added into the PTP Boundary/Ordinary Clock (ptp4l). (BZ#1685467) The PTP time synchronization on macvlan interfaces is now supported This update adds support for hardware timestamping on macvlan interfaces into the Linux kernel. As a result, macvlan interfaces can now use the Precision Time Protocol (PTP) for time synchronization. (BZ#1664359) 6.1.5. Security New package: fapolicyd The fapolicyd software framework introduces a form of application whitelisting and blacklisting based on a user-defined policy. The application whitelisting feature provides one of the most efficient ways to prevent running untrusted and possibly malicious applications on the system. The fapolicyd framework provides the following components: fapolicyd service fapolicyd command-line utilities yum plugin rule language Administrator can define the allow and deny execution rules, both with possibility of auditing, based on a path, hash, MIME type, or trust for any application. Note that every fapolicyd setup affects overall system performance. The performance hit varies depending on the use case. The application whitelisting slow-downs the open() and exec() system calls, and therefore primarily affects applications that perform such system calls frequently. See the fapolicyd(8) , fapolicyd.rules(5) , and fapolicyd.conf(5) man pages for more information. (BZ#1673323) New package: udica The new udica package provides a tool for generation SELinux policies for containers. With udica , you can create a tailored security policy for better control of how a container accesses host system resources, such as storage, devices, and network. This enables you to harden your container deployments against security violations and it also simplifies achieving and maintaining regulatory compliance. See the Creating SELinux policies for containers section in the RHEL 8 Using SELinux title for more information. (BZ#1673643) SELinux user-space tools updated to version 2.9 The libsepol , libselinux , libsemanage , policycoreutils , checkpolicy , and mcstrans SELinux user-space tools have been upgraded to the latest upstream release 2.9, which provides many bug fixes and enhancements over the version. ( BZ#1672638 , BZ#1672642 , BZ#1672637 , BZ#1672640 , BZ#1672635 , BZ#1672641 ) SETools updated to version 4.2.2 The SETools collection of tools and libraries has been upgraded to the latest upstream release 4.2.2, which provides the following changes: Removed source policy references from man pages, as loading source policies is no longer supported Fixed a performance regression in alias loading ( BZ#1672631 ) selinux-policy rebased to 3.14.3 The selinux-policy package has been upgraded to upstream version 3.14.3, which provides a number of bug fixes and enhancements to the allow rules over the version. ( BZ#1673107 ) A new SELinux type: boltd_t A new SELinux type, boltd_t , confines boltd , a system daemon for managing Thunderbolt 3 devices. As a result, boltd now runs as a confined service in SELinux enforcing mode. 
(BZ#1684103) A new SELinux policy class: bpf A new SELinux policy class, bpf , has been introduced. The bpf class enables users to control the Berkeley Packet Filter (BPF) flow through SElinux, and allows inspection and simple manipulation of Extended Berkeley Packet Filter (eBPF) programs and maps controlled by SELinux. (BZ#1673056) OpenSCAP rebased to version 1.3.1 The openscap packages have been upgraded to upstream version 1.3.1, which provides many bug fixes and enhancements over the version, most notably: Support for SCAP 1.3 source data streams: evaluating, XML schemas, and validation Tailoring files are included in ARF result files OVAL details are always shown in HTML reports, users do not have to provide the --oval-results option HTML report displays OVAL test details also for OVAL tests included from other OVAL definitions using the OVAL extend_definition element OVAL test IDs are shown in HTML reports Rule IDs are shown in HTML guides ( BZ#1718826 ) OpenSCAP now supports SCAP 1.3 The OpenSCAP suite now supports data streams conforming to the latest version of the SCAP standard - SCAP 1.3. You can now use SCAP 1.3 data streams, such as those contained in the scap-security-guide package, in the same way as SCAP 1.2 data streams without any additional usability restrictions. ( BZ#1709429 ) scap-security-guide rebased to version 0.1.46 The scap-security-guide packages have been upgraded to upstream version 0.1.46, which provides many bug fixes and enhancements over the version, most notably: * SCAP content conforms to the latest version of SCAP standard, SCAP 1.3 * SCAP content supports UBI images ( BZ#1718839 ) OpenSSH rebased to 8.0p1 The openssh packages have been upgraded to upstream version 8.0p1, which provides many bug fixes and enhancements over the version, most notably: Increased default RSA key size to 3072 bits for the ssh-keygen tool Removed support for the ShowPatchLevel configuration option Applied numerous GSSAPI key exchange code fixes, such as the fix of Kerberos cleanup procedures Removed fall back to the sshd_net_t SELinux context Added support for Match final blocks Fixed minor issues in the ssh-copy-id command Fixed Common Vulnerabilities and Exposures (CVE) related to the scp utility (CVE-2019-6111, CVE-2018-20685, CVE-2019-6109) Note, that this release introduces minor incompatibility of scp as mitigation of CVE-2019-6111. If your scripts depend on advanced bash expansions of the path during an scp download, you can use the -T switch to turn off these mitigations temporarily when connecting to trusted servers. ( BZ#1691045 ) libssh now complies with the system-wide crypto-policies The libssh client and server now automatically load the /etc/libssh/libssh_client.config file and the /etc/libssh/libssh_server.config , respectively. This configuration file includes the options set by the system-wide crypto-policies component for the libssh back end and the options set in the /etc/ssh/ssh_config or /etc/ssh/sshd_config OpenSSH configuration file. With automatic loading of the configuration file, libssh now use the system-wide cryptographic settings set by crypto-policies . This change simplifies control over the set of used cryptographic algorithms by applications. (BZ#1610883, BZ#1610884) An option for rsyslog to preserve case of FROMHOST is available This update to the rsyslog service introduces the option to manage letter case preservation of the FROMHOST property for the imudp and imtcp modules. 
Setting the preservecase value to on means the FROMHOST property is handled in a case sensitive manner. To avoid breaking existing configurations, the default values of preservecase are on for imtcp and off for imudp . (BZ#1614181) 6.1.6. Networking PMTU discovery and route redirection is now supported with VXLAN and GENEVE tunnels The kernel in Red Hat Enterprise Linux (RHEL) 8.0 did not handle Internet Control Message Protocol (ICMP) and ICMPv6 messages for Virtual Extensible LAN (VXLAN) and Generic Network Virtualization Encapsulation (GENEVE) tunnels. As a consequence, Path MTU (PMTU) discovery and route redirection was not supported with VXLAN and GENEVE tunnels in RHEL releases prior to 8.1. With this update, the kernel handles ICMP "Destination Unreachable" and "Redirect Message", as well as ICMPv6 "Packet Too Big" and "Destination Unreachable" error messages by adjusting the PMTU and modifying forwarding information. As a result, RHEL 8.1 supports PMTU discovery and route redirection with VXLAN and GENEVE tunnels. (BZ#1652222) Notable changes in XDP and networking eBPF features in kernel The XDP and the networking eBPF features in the kernel package have been upgraded to upstream version 5.0, which provides a number of bug fixes and enhancements over the version: eBPF programs can now better interact with the TCP/IP stack, perform flow dissection, have wider range of bpf helpers available, and have access to new map types. XDP metadata are now available to AF_XDP sockets. (BZ#1687459) The new PTP_SYS_OFFSET_EXTENDED control for ioctl() improves the accuracy of measured system-PHC ofsets This enhancement adds the PTP_SYS_OFFSET_EXTENDED control for more accurate measurements of the system precision time protocol (PTP) hardware clock (PHC) offset to the ioctl() function. The PTP_SYS_OFFSET control which, for example, the chrony service uses to measure the offset between a PHC and the system clock is not accurate enough. With the new PTP_SYS_OFFSET_EXTENDED control, drivers can isolate the reading of the lowest bits. This improves the accuracy of the measured offset. Network drivers typically read multiple PCI registers, and the driver does not read the lowest bits of the PHC time stamp between two readings of the system clock. (BZ#1677215) ipset rebased to version 7.1 The ipset packages have been upgraded to upstream version 7.1, which provides a number of bug fixes and enhancements over the version: The ipset protocol version 7 introduces the IPSET_CMD_GET_BYNAME and IPSET_CMD_GET_BYINDEX operations. Additionally, the user space component can now detect the exact compatibility level that the kernel component supports. A significant number of bugs have been fixed, such as memory leaks and use-after-free bugs. (BZ#1649090) 6.1.7. Kernel Kernel version in RHEL 8.1 Red Hat Enterprise Linux 8.1 is distributed with the kernel version 4.18.0-147. (BZ#1797671) Live patching for the kernel is now available Live patching for the kernel, kpatch , provides a mechanism to patch the running kernel without rebooting or restarting any processes. Live kernel patches will be provided for selected minor release streams of RHEL covered under the Extended Update Support (EUS) policy to remediate Critical and Important CVEs. To subscribe to the kpatch stream for the RHEL 8.1 version of the kernel, install the kpatch-patch-4_18_0-147 package provided by the RHEA-2019:3695 advisory. For more information, see Applying patches with kernel live patching in Managing, monitoring and updating the kernel. 
(BZ#1763780) Extended Berkeley Packet Filter in RHEL 8 Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in the restricted sandbox environment with access to a limited set of functions. The virtual machine executes special assembly-like code. The code is then loaded to the kernel and translated to the native machine code with just-in-time compilation. There are numerous components shipped by Red Hat that utilize the eBPF virtual machine. Each component is in a different development phase, and thus not all components are currently fully supported. In RHEL 8.1, the BPF Compiler Collection (BCC) tools package is fully supported on the AMD and Intel 64-bit architectures. The BCC tools package is a collection of dynamic kernel tracing utilities that use the eBPF virtual machine. The following eBPF components are currently available as a Technology Preview: The BCC tools package on the following architectures: the 64-bit ARM architecture, IBM Power Systems, Little Endian, and IBM Z The BCC library on all architectures The bpftrace tracing language The eXpress Data Path (XDP) feature For details regarding the Technology Preview components, see Section 6.5.2, "Kernel" . (BZ#1780124) Red Hat Enterprise Linux 8 now supports early kdump The early kdump feature allows the crash kernel and initramfs to load early enough to capture the vmcore information even for early crashes. For more details about early kdump , see the /usr/share/doc/kexec-tools/early-kdump-howto.txt file. (BZ#1520209) RHEL 8 now supports ipcmni_extend A new kernel command line parameter ipcmni_extend has been added to Red Hat Enterprise Linux 8. The parameter extends a number of unique System V Inter-process Communication (IPC) identifiers from the current maximum of 32 KB (15 bits) up to 16 MB (24 bits). As a result, users whose applications produce a lot of shared memory segments are able to create a stronger IPC identifier without exceeding the 32 KB limit. Note that in some cases using ipcmni_extend results in a small performance overhead and it should be used only if the applications need more than 32 KB of unique IPC identifier. (BZ#1710480) The persistent memory initialization code supports parallel initialization The persistent memory initialization code enables parallel initialization on systems with multiple nodes of persistent memory. The parallel initialization greatly reduces the overall memory initialization time on systems with large amounts of persistent memory. As a result, these systems can now boot much faster. (BZ#1634343) TPM userspace tool has been updated to the last version The tpm2-tools userspace tool has been updated to version 2.0. With this update, tpm2-tools is able to fix many defects. ( BZ#1664498 ) The rngd daemon is now able to run with non-root privileges The random number generator daemon ( rngd ) checks whether data supplied by the source of randomness is sufficiently random and then stores the data in the kernel's random-number entropy pool. With this update, rngd is able to run with non-root user privileges to enhance system security. ( BZ#1692435 ) Full support for the ibmvnic driver With the introduction of Red Hat Enterprise Linux 8.0, the IBM Virtual Network Interface Controller (vNIC) driver for IBM POWER architectures, ibmvnic , was available as a Technology Preview. vNIC is a PowerVM virtual networking technology that delivers enterprise capabilities and simplifies network management. 
Full support for the ibmvnic driver
With the introduction of Red Hat Enterprise Linux 8.0, the IBM Virtual Network Interface Controller (vNIC) driver for IBM POWER architectures, ibmvnic, was available as a Technology Preview. vNIC is a PowerVM virtual networking technology that delivers enterprise capabilities and simplifies network management. It is a high-performance, efficient technology that, when combined with an SR-IOV NIC, provides bandwidth-control Quality of Service (QoS) capabilities at the virtual NIC level. vNIC significantly reduces virtualization overhead, resulting in lower latencies and fewer server resources, including CPU and memory, required for network virtualization. Starting with Red Hat Enterprise Linux 8.1, the ibmvnic device driver is fully supported on IBM POWER9 systems. (BZ#1665717)
Intel(R) Omni-Path Architecture (OPA) Host Software
Intel Omni-Path Architecture (OPA) host software is fully supported in Red Hat Enterprise Linux 8.1. Intel OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment. (BZ#1766186)
UBSan has been enabled in the debug kernel in RHEL 8
The Undefined Behavior Sanitizer (UBSan) utility exposes undefined behavior flaws in C code at runtime. This utility has now been enabled in the debug kernel because the compiler behavior was, in some cases, different from developers' expectations, especially with compiler optimizations, where subtle, obscure bugs would appear. As a result, running the debug kernel with UBSan enabled allows the system to easily detect such bugs. (BZ#1571628)
The fadump infrastructure now supports re-registering in RHEL 8
Support has been added for re-registering (unregistering and registering) the firmware-assisted dump (fadump) infrastructure after any memory hot add or remove operation, to update the crash memory ranges. The feature aims to prevent potential race conditions when fadump is unregistered and registered from user space during udev events (a sketch of the user-space sysfs interface appears after the Hardware enablement notes below). (BZ#1710288)
The determine_maximum_mpps.sh script has been introduced in RHEL for Real Time 8
The determine_maximum_mpps.sh script has been introduced to help use the queuelat test program. The script executes queuelat to determine the maximum packets per second a machine can handle. (BZ#1686494)
kernel-rt source tree now matches the latest RHEL 8 tree
The kernel-rt sources have been upgraded to be based on the latest Red Hat Enterprise Linux kernel source tree, which provides a number of bug fixes and enhancements over the previous version. (BZ#1678887)
The ssdd test has been added to RHEL for Real Time 8
The ssdd test has been added to enable stress testing of the tracing subsystem. The test runs multiple tracing threads to verify locking is correct within the tracing system. (BZ#1666351)
6.1.8. Hardware enablement
Memory Mode for Optane DC Persistent Memory technology is fully supported
Intel Optane DC Persistent Memory storage devices provide data center-class persistent memory technology, which can significantly increase transaction throughput. To use the Memory Mode technology, your system does not require any special drivers or specific certification. Memory Mode is transparent to the operating system. (BZ#1718422)
IBM Z now supports system boot signature verification
Secure Boot allows the system firmware to check the authenticity of cryptographic keys that were used to sign the kernel space code. As a result, the feature improves security since only code from trusted vendors can be executed. Note that IBM z15 is required to use Secure Boot. (BZ#1659399)
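A minimal sketch of the user-space side of the fadump re-registration described above; it assumes an IBM POWER system with fadump enabled and must be run as root:

# Unregister and re-register fadump so that the crash memory ranges are refreshed
echo 0 > /sys/kernel/fadump_registered
echo 1 > /sys/kernel/fadump_registered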
6.1.9. File systems and storage
Support for Data Integrity Field/Data Integrity Extension (DIF/DIX)
DIF/DIX is supported on configurations where the hardware vendor has qualified it and provides full support for the particular host bus adapter (HBA) and storage array configuration on RHEL. DIF/DIX is not supported on the following configurations:
* It is not supported for use on the boot device.
* It is not supported on virtualized guests.
* Red Hat does not support using the Automatic Storage Management library (ASMLib) when DIF/DIX is enabled.
DIF/DIX is enabled or disabled at the storage device, which involves various layers up to (and including) the application. The method for activating the DIF on storage devices is device-dependent. For further information on the DIF/DIX feature, see What is DIF/DIX. (BZ#1649493)
Optane DC memory systems now support EDAC reports
Previously, EDAC was not reporting memory corrected/uncorrected events if the memory address was within an NVDIMM module. With this update, EDAC can properly report the events with the correct memory module information. (BZ#1571534)
The VDO Ansible module has been moved to Ansible packages
Previously, the VDO Ansible module was provided by the vdo RPM package. Starting with this release, the module is provided by the ansible package instead, and the VDO Ansible module file has moved to a new location accordingly. The vdo package continues to distribute Ansible playbooks. For more information on Ansible, see http://docs.ansible.com/. (BZ#1669534)
Aero adapters are now fully supported
The following Aero adapters, previously available as a Technology Preview, are now fully supported:
* PCI ID 0x1000:0x00e2 and 0x1000:0x00e6, controlled by the mpt3sas driver
* PCI ID 0x1000:0x10e5 and 0x1000:0x10e6, controlled by the megaraid_sas driver
(BZ#1663281)
LUKS2 now supports online re-encryption
The Linux Unified Key Setup version 2 (LUKS2) format now supports re-encrypting encrypted devices while the devices are in use. For example, you do not have to unmount the file system on the device to change the volume key or to change the encryption algorithm. When encrypting a non-encrypted device, you must still unmount the file system, but the encryption is now significantly faster. You can remount the file system after a short initialization of the encryption. Additionally, the LUKS2 re-encryption is now more resilient. You can select between several options that prioritize performance or data protection during the re-encryption process. To perform the LUKS2 re-encryption, use the cryptsetup reencrypt subcommand. Red Hat no longer recommends using the cryptsetup-reencrypt utility for the LUKS2 format. Note that the LUKS1 format does not support online re-encryption, and the cryptsetup reencrypt subcommand is not compatible with LUKS1. To encrypt or re-encrypt a LUKS1 device, use the cryptsetup-reencrypt utility. For more information on disk encryption, see Encrypting block devices using LUKS. (BZ#1676622)
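A minimal sketch of the online re-encryption described above; the device path is illustrative, and the command prompts for the existing passphrase:

# Re-encrypt an existing LUKS2 device, for example to rotate the volume key, while it remains in use
cryptsetup reencrypt /dev/sdb1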
New features of ext4 available in RHEL 8
In RHEL 8, the following ext4 features are now fully supported:
Non-default features:
* project quota
* mmp
Non-default mount options:
* bsddf|minixdf
* grpid|bsdgroups and nogrpid|sysvgroups
* resgid=n and resuid=n
* errors={continue|remount-ro|panic}
* commit=nrsec
* max_batch_time=usec
* min_batch_time=usec
* grpquota|noquota|quota|usrquota
* prjquota
* dax
* lazytime|nolazytime
* discard|nodiscard
* init_itable|noinit_itable
* jqfmt={vfsold|vfsv0|vfsv1}
* usrjquota=aquota.user|grpjquota=aquota.group
For more information on features and mount options, see the ext4 man page. Other ext4 features and mount options, or combinations of them, may not be fully supported by Red Hat. If your special workload requires a feature or mount option that is not fully supported in the Red Hat release, contact Red Hat support to evaluate it for inclusion in our supported list. (BZ#1741531)
NVMe over RDMA now supports InfiniBand in target mode for IBM Coral systems
In RHEL 8.1, NVMe over RDMA now supports InfiniBand in target mode for IBM Coral systems, with a single NVMe PCIe add-in card as the target. (BZ#1721683)
6.1.10. High availability and clusters
Pacemaker now defaults the concurrent-fencing cluster property to true
If multiple cluster nodes need to be fenced at the same time, and they use different configured fence devices, Pacemaker will now execute the fencing simultaneously, rather than serialized as before. This can greatly speed up recovery in a large cluster when multiple nodes must be fenced. (BZ#1715426)
Extending a shared logical volume no longer requires a refresh on every cluster node
With this release, extending a shared logical volume no longer requires a refresh on every cluster node after running the lvextend command on one cluster node. For the full procedure to extend the size of a GFS2 file system, see Growing a GFS2 file system. (BZ#1649086)
Maximum size of a supported RHEL HA cluster increased from 16 to 32 nodes
With this release, Red Hat supports cluster deployments of up to 32 full cluster nodes. (BZ#1693491)
Commands for adding, changing, and removing corosync links have been added to pcs
The Kronosnet (knet) protocol now allows you to add and remove knet links in running clusters. To support this feature, the pcs command now provides commands to add, change, and remove knet links and to change a udp/udpu link in an existing cluster. For information on adding and modifying links in an existing cluster, see Adding and modifying links in an existing cluster. (BZ#1667058)
6.1.11. Dynamic programming languages, web and database servers
A new module stream: php:7.3
RHEL 8.1 introduces PHP 7.3, which provides a number of new features and enhancements. Notable changes include:
* Enhanced and more flexible heredoc and nowdoc syntaxes
* The PCRE extension upgraded to PCRE2
* Improved multibyte string handling
* Support for LDAP controls
* Improved FastCGI Process Manager (FPM) logging
* Several deprecations and backward incompatible changes
For more information, see Migrating from PHP 7.2.x to PHP 7.3.x. Note that the RHEL 8 version of PHP 7.3 does not support the Argon2 password hashing algorithm. To install the php:7.3 stream, use the yum module command shown in the example after this entry. If you want to upgrade from the php:7.2 stream, see Switching to a later stream. (BZ#1653109)
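A sketch of the installation step referenced above, assuming the default profile of the module stream:

# Install the PHP 7.3 module stream
yum module install php:7.3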
A new module stream: ruby:2.6
A new module stream, ruby:2.6, is now available. Ruby 2.6.3, included in RHEL 8.1, provides numerous new features, enhancements, bug and security fixes, and performance improvements over version 2.5 distributed in RHEL 8.0. Notable enhancements include:
* Constant names are now allowed to begin with a non-ASCII capital letter.
* Support for an endless range has been added.
* A new Binding#source_location method has been provided.
* $SAFE is now a process global state and it can be set back to 0.
The following performance improvements have been implemented:
* The Proc#call and block.call processes have been optimized.
* A new garbage collector managed heap, Transient heap (theap), has been introduced.
* Native implementations of coroutines for individual architectures have been introduced.
In addition, Ruby 2.5, provided by the ruby:2.5 stream, has been upgraded to version 2.5.5, which provides a number of bug and security fixes. To install the ruby:2.6 stream, use the yum module command (see the combined example below). If you want to upgrade from the ruby:2.5 stream, see Switching to a later stream. (BZ#1672575)
A new module stream: nodejs:12
RHEL 8.1 introduces Node.js 12, which provides a number of new features and enhancements over version 10. Notable changes include:
* The V8 engine upgraded to version 7.4
* A new default HTTP parser, llhttp (no longer experimental)
* Integrated capability of heap dump generation
* Support for ECMAScript 2015 (ES6) modules
* Improved support for native modules
* Worker threads no longer require a flag
* A new experimental diagnostic report feature
* Improved performance
To install the nodejs:12 stream, use the yum module command (see the combined example below). If you want to upgrade from the nodejs:10 stream, see Switching to a later stream. (BZ#1685191)
Judy-devel available in CRB
The Judy-devel package is now available as a part of the mariadb-devel:10.3 module in the CodeReady Linux Builder repository (CRB). As a result, developers are now able to build applications with the Judy library. To install the Judy-devel package, enable the mariadb-devel:10.3 module first (see the combined example below). (BZ#1657053)
FIPS compliance in Python 3
This update adds support for OpenSSL FIPS mode to Python 3. Namely:
* In FIPS mode, the blake2, sha3, and shake hashes use the OpenSSL wrappers and do not offer extended functionality (such as keys, tree hashing, or custom digest size).
* In FIPS mode, the hmac.HMAC class can be instantiated only with an OpenSSL wrapper or a string with an OpenSSL hash name as the digestmod argument. The argument must be specified (instead of defaulting to the md5 algorithm).
Note that hash functions support the usedforsecurity argument, which allows using insecure hashes in OpenSSL FIPS mode. The user is responsible for ensuring compliance with any relevant standards. (BZ#1731424)
FIPS compliance changes in python3-wheel
This update of the python3-wheel package removes a built-in implementation for signing and verifying data that is not compliant with FIPS. (BZ#1731526)
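A combined sketch of the installation commands referenced in the ruby:2.6, nodejs:12, and Judy-devel entries above; it assumes the default module profiles and that the CodeReady Linux Builder repository is enabled:

# Install the Ruby 2.6 and Node.js 12 module streams
yum module install ruby:2.6
yum module install nodejs:12
# Enable the mariadb-devel:10.3 module, then install Judy-devel from CRB
yum module enable mariadb-devel:10.3
yum install Judy-devel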
A new module stream: nginx:1.16
The nginx 1.16 web and proxy server, which provides a number of new features and enhancements over version 1.14, is now available. For example:
* Numerous updates related to SSL (loading of SSL certificates and secret keys from variables, variable support in the ssl_certificate and ssl_certificate_key directives, a new ssl_early_data directive)
* New keepalive-related directives
* A new random directive for distributed load balancing
* New parameters and improvements to existing directives (port ranges for the listen directive, a new delay parameter for the limit_req directive, which enables two-stage rate limiting)
* A new $upstream_bytes_sent variable
* Improvements to User Datagram Protocol (UDP) proxying
Other notable changes include:
* In the nginx:1.16 stream, the nginx package does not require the nginx-all-modules package, therefore nginx modules must be installed explicitly. When you install nginx as a module, the nginx-all-modules package is installed as a part of the common profile, which is the default profile.
* The ssl directive has been deprecated; use the ssl parameter for the listen directive instead.
* nginx now detects missing SSL certificates during configuration testing.
* When using a host name in the listen directive, nginx now creates listening sockets for all addresses that the host name resolves to.
To install the nginx:1.16 stream, use the yum module command (a combined example appears after the GCC Toolset 9 entry below). If you want to upgrade from the nginx:1.14 stream, see Switching to a later stream. (BZ#1690292)
perl-IO-Socket-SSL rebased to version 2.066
The perl-IO-Socket-SSL package has been upgraded to version 2.066, which provides a number of bug fixes and enhancements over the previous version, for example:
* Improved support for TLS 1.3, notably a session reuse and an automatic post-handshake authentication on the client side
* Added support for multiple curves, automatic setting of curves, partial trust chains, and support for RSA and ECDSA certificates on the same domain
(BZ#1632600)
perl-Net-SSLeay rebased to version 1.88
The perl-Net-SSLeay package has been upgraded to version 1.88, which provides multiple bug fixes and enhancements. Notable changes include:
* Improved compatibility with OpenSSL 1.1.1, such as manipulating a stack of certificates and X509 stores, and selecting elliptic curves and groups
* Improved compatibility with TLS 1.3, for example, a session reuse and a post-handshake authentication
* Fixed memory leak in the cb_data_advanced_put() subroutine.
(BZ#1632597)
6.1.12. Compilers and development tools
GCC Toolset 9 available
Red Hat Enterprise Linux 8.1 introduces GCC Toolset 9, an Application Stream containing more up-to-date versions of development tools. The following tools and versions are provided by GCC Toolset 9:
* GCC 9.1.1
* GDB 8.3
* Valgrind 3.15.0
* SystemTap 4.1
* Dyninst 10.1.0
* binutils 2.32
* elfutils 0.176
* dwz 0.12
* make 4.2.1
* strace 5.1
* ltrace 0.7.91
* annobin 8.79
GCC Toolset 9 is available as an Application Stream in the form of a Software Collection in the AppStream repository. GCC Toolset is a set of tools similar to Red Hat Developer Toolset for RHEL 7. Commands to install GCC Toolset 9, to run a tool from it, and to open a shell session in which tool versions from GCC Toolset 9 take precedence over system versions of these tools are shown in the example after this entry. For detailed instructions regarding usage, see Using GCC Toolset. (BZ#1685482)
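A sketch of the commands referenced in the nginx:1.16 and GCC Toolset 9 entries above; the profile and the sample gcc invocation are illustrative:

# Install the nginx 1.16 module stream
yum module install nginx:1.16
# Install GCC Toolset 9
yum install gcc-toolset-9
# Run a single tool from GCC Toolset 9
scl enable gcc-toolset-9 'gcc --version'
# Open a shell session in which GCC Toolset 9 tools take precedence over system versions
scl enable gcc-toolset-9 bash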
Upgraded compiler toolsets
The following compiler toolsets, distributed as Application Streams, have been upgraded with RHEL 8.1:
* Clang and LLVM Toolset, which provides the LLVM compiler infrastructure framework, the Clang compiler for the C and C++ languages, the LLDB debugger, and related tools for code analysis, to version 8.0.1
* Rust Toolset, which provides the Rust programming language compiler rustc, the cargo build tool and dependency manager, and required libraries, to version 1.37
* Go Toolset, which provides the Go (golang) programming language tools and libraries, to version 1.12.8
(BZ#1731502, BZ#1691975, BZ#1680091, BZ#1677819, BZ#1681643)
SystemTap rebased to version 4.1
The SystemTap instrumentation tool has been updated to upstream version 4.1. Notable improvements include:
* The eBPF runtime backend can handle more features of the scripting language such as string variables and rich formatted printing.
* Performance of the translator has been significantly improved.
* More types of data in optimized C code can now be extracted with DWARF4 debuginfo constructs.
(BZ#1675740)
General availability of the DHAT tool
Red Hat Enterprise Linux 8.1 introduces the general availability of the DHAT tool. It is based on the valgrind tool version 3.15.0. Changes and improvements in valgrind functionality include:
* Use --tool=dhat instead of --tool=exp-dhat.
* The --show-top-n and --sort-by options have been removed because the dhat tool now prints the minimal data after the program ends.
* A new viewer, dh_view.html, which is a JavaScript program, contains the profile results. A short message explains how to view the results after the run is ended.
* The viewer is located at /usr/libexec/valgrind/dh_view.html, and the documentation for the DHAT tool is located at /usr/share/doc/valgrind/html/dh-manual.html.
* Support for the RDRAND and F16C instruction set extensions on amd64 (x86_64) has been added.
* In cachegrind, the cg_annotate command has a new option, --show-percs, which prints percentages for all event counts.
* In callgrind, the callgrind_annotate command has a new option, --show-percs, which prints percentages for all event counts.
* In massif, the default value for --read-inline-info is now yes.
* In memcheck, the option --xtree-leak=yes, which outputs leak results in xtree format, automatically activates the option --show-leak-kinds=all.
* The new option --show-error-list=no|yes displays the list of the detected errors and the used suppressions at the end of the run. Previously, the user could specify the option -v for the valgrind command, which shows a lot of information that might be confusing. The -s option is equivalent to --show-error-list=yes.
(BZ#1683715)
elfutils rebased to version 0.176
The elfutils packages have been updated to upstream version 0.176. This version brings various bug fixes, and resolves the following vulnerabilities: CVE-2019-7146, CVE-2019-7149, CVE-2019-7150, CVE-2019-7664, and CVE-2019-7665. Notable improvements include:
* The libdw library has been extended with the dwelf_elf_begin() function, which is a variant of elf_begin() that handles compressed files.
* A new --reloc-debug-sections-only option has been added to the eu-strip tool to resolve all trivial relocations between debug sections in place without any other stripping. This functionality is relevant only for ET_REL files in certain circumstances.
(BZ#1683705)
Additional memory allocation checks in glibc
Application memory corruption is a leading cause of application and security defects.
Early detection of such corruption, balanced against the cost of detection, can provide significant benefits to application developers. To improve detection, six additional memory corruption checks have been added to the malloc metadata in the GNU C Library ( glibc ), which is the core C library in RHEL. These additional checks have been added at a very low cost to runtime performance. (BZ#1651283) GDB can access more POWER8 registers With this update, the GNU debugger (GDB) and its remote stub gdbserver can access the following additional registers and register sets of the POWER8 processor line of IBM: PPR DSCR TAR EBB/PMU HTM (BZ#1187581) binutils disassembler can handle NFP binary files The disassembler tool from the binutils package has been extended to handle binary files for the Netronome Flow Processor (NFP) hardware series. This functionality is required to enable further features in the bpftool Berkeley Packet Filter (BPF) code compiler. (BZ#1644391) Partially writable GOT sections are now supported on the IBM Z architecture The IBM Z binaries using the "lazy binding" feature of the loader can now be hardened by generating partially writable Global offset table (GOT) sections. These binaries require a read-write GOT, but not all entries to be writable. This update provides protection for the entries from potential attacks. (BZ#1525406) binutils now supports Arch13 processors of IBM Z This update adds support for the extensions related to the Arch13 processors into the binutils packages on IBM Z architecture. As a result, it is now possible to build kernels that can use features available in arch13-enabled CPUs on IBM Z. (BZ#1659437) Dyninst rebased to version 10.1.0 The Dyninst instrumentation library has been updated to upstream version 10.1.0. Notable changes include: Dyninst supports the Linux PowerPC Little Endian ( ppcle ) and 64-bit ARM ( aarch64 ) architectures. Start-up time has been improved by using parallel code analysis. (BZ#1648441) Date formatting updates for the Japanese Reiwa era The GNU C Library now provides correct Japanese era name formatting for the Reiwa era starting on May 1st, 2019. The time handling API data has been updated, including the data used by the strftime and strptime functions. All APIs will correctly print the Reiwa era including when strftime is used along with one of the era conversion specifiers such as %EC , %EY , or %Ey . (BZ#1577438) Performance Co-Pilot rebased to version 4.3.2 In RHEL 8.1, the Performance Co-Pilot (PCP) tool has been updated to upstream version 4.3.2. Notable improvements include: New metrics have been added - Linux kernel entropy, pressure stall information, Nvidia GPU statistics, and more. Tools such as pcp-dstat , pcp-atop , the perfevent PMDA, and others have been updated to report the new metrics. The pmseries and pmproxy utilities for a performant PCP integration with Grafana have been updated. This release is backward compatible for libraries, over-the-wire protocol and on-disk PCP archive format. ( BZ#1685302 ) 6.1.13. Identity Management IdM now supports Ansible roles and modules for installation and management This update introduces the ansible-freeipa package, which provides Ansible roles and modules for Identity Management (IdM) deployment and management. You can use Ansible roles to install and uninstall IdM servers, replicas, and clients. You can use Ansible modules to manage IdM groups, topology, and users. There are also example playbooks available. 
This update simplifies the installation and configuration of IdM based solutions. (JIRA:RHELPLAN-2542)
New tool to test the overall fitness of IdM deployment: Healthcheck
This update introduces the Healthcheck tool in Identity Management (IdM). The tool provides tests verifying that the current IdM server is configured and running correctly. The major areas currently covered are:
* Certificate configuration and expiration dates
* Replication errors
* Replication topology
* AD Trust configuration
* Service status
* File permissions of important configuration files
* Filesystem space
The Healthcheck tool is available in the command-line interface (CLI); an example invocation appears later in this section. (JIRA:RHELPLAN-13066)
IdM now supports renewing expired system certificates when the server is offline
With this enhancement, administrators can renew expired system certificates when Identity Management (IdM) is offline. When a system certificate expires, IdM fails to start. The new ipa-cert-fix command replaces the workaround to manually set the date back to proceed with the renewal process. As a result, downtime and support costs are reduced in this scenario. (JIRA:RHELPLAN-13074)
Identity Management supports trust with Windows Server 2019
When using Identity Management, you can now establish a supported forest trust to Active Directory forests that run Windows Server 2019. The supported forest and domain functional levels are unchanged and supported up to the Windows Server 2016 level. (JIRA:RHELPLAN-15036)
samba rebased to version 4.10.4
The samba packages have been upgraded to upstream version 4.10.4, which provides a number of bug fixes and enhancements over the previous version:
* Samba 4.10 fully supports Python 3. Note that future Samba versions will not have any runtime support for Python 2.
* The JavaScript Object Notation (JSON) logging feature now logs the Windows event ID and logon type for authentication messages.
* The new vfs_glusterfs_fuse file system in user space (FUSE) module improves the performance when Samba accesses a GlusterFS volume. To enable this module, add glusterfs_fuse to the vfs_objects parameter of the share in the /etc/samba/smb.conf file. Note that vfs_glusterfs_fuse does not replace the existing vfs_glusterfs module.
* The server message block (SMB) client Python bindings are now deprecated and will be removed in a future Samba release. This only affects users who use the Samba Python bindings to write their own utilities.
Samba automatically updates its tdb database files when the smbd, nmbd, or winbind service starts. Back up the database files before starting Samba. Note that Red Hat does not support downgrading tdb database files. For further information about notable changes, read the upstream release notes before updating: https://www.samba.org/samba/history/samba-4.10.0.html (BZ#1638001)
Updated system-wide certificate store location for OpenLDAP
The default location for trusted CAs for OpenLDAP has been updated to use the system-wide certificate store (/etc/pki/ca-trust/source) instead of /etc/openldap/certs. This change has been made to simplify setting up CA trust. No additional setup is required to set up CA trust, unless you have service-specific requirements. For example, if you require an LDAP server's certificate to be trusted only for LDAP client connections, you must set up the CA certificates as you did previously. (JIRA:RHELPLAN-7109)
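A sketch of running the Healthcheck tool described earlier in this section; the package and command names assume the standard ipa-healthcheck packaging on an IdM server:

# Install and run the IdM Healthcheck tool on an IdM server (run as root)
yum install ipa-healthcheck
ipa-healthcheck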
New ipa-crl-generation commands have been introduced to simplify managing the IdM CRL master
This update introduces the ipa-crl-generation status/enable/disable commands. These commands, run by the root user, simplify work with the Certificate Revocation List (CRL) in IdM. Previously, moving the CRL generation master from one IdM CA server to another was a lengthy, manual, and error-prone procedure. The ipa-crl-generation status command checks if the current host is the CRL generation master. The ipa-crl-generation enable command makes the current host the CRL generation master in IdM if the current host is an IdM CA server. The ipa-crl-generation disable command stops CRL generation on the current host. Additionally, the ipa-server-install --uninstall command now includes a safeguard checking whether the host is the CRL generation master. This way, IdM ensures that the system administrator does not remove the CRL generation master from the topology. (JIRA:RHELPLAN-13068)
OpenID Connect support in keycloak-httpd-client-install
The keycloak-httpd-client-install tool previously supported only SAML (Security Assertion Markup Language) authentication with the mod_auth_mellon authentication module. This rebase introduces support for the mod_auth_openidc authentication module, which also allows you to configure OpenID Connect authentication. The keycloak-httpd-client-install tool allows an Apache instance to be configured as an OpenID Connect client by configuring mod_auth_openidc. (BZ#1553890)
Setting up IdM as a hidden replica is now available as a Technology Preview
This enhancement enables administrators to set up an Identity Management (IdM) replica as a hidden replica. A hidden replica is an IdM server that has all services running and available. However, it is not advertised to other clients or masters because no SRV records exist for the services in DNS, and LDAP server roles are not enabled. Therefore, clients cannot use service discovery to detect hidden replicas. Hidden replicas are primarily designed for dedicated services that can otherwise disrupt clients. For example, a full backup of IdM requires shutting down all IdM services on the master or replica. Since no clients use a hidden replica, administrators can temporarily shut down the services on this host without affecting any clients. Other use cases include high-load operations on the IdM API or the LDAP server, such as a mass import or extensive queries. To install a new hidden replica, use the ipa-replica-install --hidden-replica command. To change the state of an existing replica, use the ipa server-state command. (BZ#1719767)
SSSD now enforces AD GPOs by default
The default setting for the SSSD option ad_gpo_access_control is now enforcing. In RHEL 8, SSSD enforces access control rules based on Active Directory Group Policy Objects (GPOs) by default. Red Hat recommends ensuring GPOs are configured correctly in Active Directory before upgrading from RHEL 7 to RHEL 8. If you do not want SSSD to enforce GPOs, change the value of the ad_gpo_access_control option in the /etc/sssd/sssd.conf file to permissive. (JIRA:RHELPLAN-51289)
6.1.14. Desktop
Modified workspace switcher in GNOME Classic
The workspace switcher in the GNOME Classic environment has been modified. The switcher is now located in the right part of the bottom bar, and it is designed as a horizontal strip of thumbnails. Switching between workspaces is possible by clicking on the required thumbnail.
Alternatively, you can also use the combination of the Ctrl + Alt + down/up arrow keys to switch between workspaces. The content of the active workspace is shown in the left part of the bottom bar in the form of a window list. When you press the Super key within a particular workspace, you can see the window picker, which includes all windows that are open in this workspace. However, the window picker no longer displays the following elements that were available in the previous release of RHEL:
* dock (vertical bar on the left side of the screen)
* workspace switcher (vertical bar on the right side of the screen)
* search entry
For particular tasks that were previously achieved with the help of these elements, adopt the following approaches:
* To launch applications, instead of using the dock, you can use the Applications menu on the top bar, or press the Alt + F2 keys to make the Enter a Command screen appear, and write the name of the executable into this screen.
* To switch between workspaces, instead of using the vertical workspace switcher, use the horizontal workspace switcher in the right part of the bottom bar.
* If you require the search entry or the vertical workspace switcher, use the GNOME Standard environment instead of GNOME Classic.
(BZ#1704360)
6.1.15. Graphics infrastructures
DRM rebased to Linux kernel version 5.1
The Direct Rendering Manager (DRM) kernel graphics subsystem has been rebased to upstream Linux kernel version 5.1, which provides a number of bug fixes and enhancements over the previous version. Most notably:
* The mgag200 driver has been updated. The driver continues providing support for HPE ProLiant Gen10 Systems, which use Matrox G200 eH3 GPUs. The updated driver also supports current and new Dell EMC PowerEdge Servers.
* The nouveau driver has been updated to provide hardware enablement to current and future Lenovo platforms that use NVIDIA GPUs.
* The i915 display driver has been updated for continued support of current and new Intel GPUs.
* Bug fixes for Aspeed AST BMC display chips have been added.
* Support for the AMD Raven 2 set of Accelerated Processing Units (APUs) has been added.
* Support for AMD Picasso APUs has been added.
* Support for AMD Vega GPUs has been added.
* Support for Intel Amber Lake-Y and Intel Comet Lake-U GPUs has been added.
(BZ#1685552)
Support for AMD Picasso graphic cards
This update introduces the amdgpu graphics driver. As a result, AMD Picasso graphics cards are now fully supported on RHEL 8. (BZ#1685427)
6.1.16. The web console
Enabling and disabling SMT
Simultaneous Multi-Threading (SMT) configuration is now available in RHEL 8. Disabling SMT in the web console allows you to mitigate a class of CPU security vulnerabilities such as:
* Microarchitectural Data Sampling
* L1 Terminal Fault Attack
(BZ#1678956)
Adding a search box in the Services page
The Services page now has a search box for filtering services by:
* Name
* Description
* State
In addition, service states have been merged into one list. The switcher buttons at the top of the page have also been changed to tabs to improve the user experience of the Services page. (BZ#1657752)
Adding support for firewall zones
The firewall settings on the Networking page now support:
* Adding and removing zones
* Adding or removing services to arbitrary zones
* Configuring custom ports in addition to firewalld services
(BZ#1678473)
Adding improvements to Virtual Machines configuration
With this update, the RHEL 8 web console includes many improvements on the Virtual Machines page.
You can now:
* Manage various types of storage pools
* Configure VM autostart
* Import existing qcow images
* Install VMs through PXE boot
* Change memory allocation
* Pause/resume VMs
* Configure cache characteristics (directsync, writeback)
* Change the boot order
(BZ#1658847)
6.1.17. Red Hat Enterprise Linux system roles
A new storage role added to RHEL system roles
The storage role has been added to RHEL system roles provided by the rhel-system-roles package. The storage role can be used to manage local storage using Ansible. Currently, the storage role supports the following types of tasks:
* Managing file systems on whole disks
* Managing LVM volume groups
* Managing logical volumes and their file systems
For more information, see Managing file systems and Configuring and managing logical volumes. (BZ#1691966)
6.1.18. Virtualization
WALinuxAgent rebased to version 2.2.38
The WALinuxAgent package has been upgraded to upstream version 2.2.38, which provides a number of bug fixes and enhancements over the previous version. In addition, WALinuxAgent is no longer compatible with Python 2 or with applications dependent on Python 2. As a result, applications and extensions written in Python 2 will need to be converted to Python 3 to establish compatibility with WALinuxAgent. (BZ#1722848)
Windows automatically finds the needed virtio-win drivers
Windows can now automatically find the virtio-win drivers it needs from the driver ISO without requiring the user to select the folder in which they are located. (BZ#1223668)
KVM supports 5-level paging
With Red Hat Enterprise Linux 8, KVM virtualization supports the 5-level paging feature. On selected host CPUs, this significantly increases the physical and virtual address space that the host and guest systems can use. (BZ#1526548)
Smart card sharing is now supported on Windows guests with ActivClient drivers
This update adds support for smart card sharing in virtual machines (VMs) that use a Windows guest OS and ActivClient drivers. This enables smart card authentication for user logins using emulated or shared smart cards on these VMs. (BZ#1615840)
New options have been added for virt-xml
The virt-xml utility can now use the following command-line options:
* --no-define - Changes done to the virtual machine (VM) by the virt-xml command are not saved into the persistent configuration.
* --start - Starts the VM after performing the requested changes.
Using these two options together allows users to change the configuration of a VM and start the VM with the new configuration without making the changes persistent. For example, a single command can change the boot order of the testguest VM to network for the next boot and initiate the boot; see the example command after this section. (JIRA:RHELPLAN-13960)
IBM z14 GA2 CPUs supported by KVM
With this update, KVM supports the IBM z14 GA2 CPU model. This makes it possible to create virtual machines on IBM z14 GA2 hosts that use RHEL 8 as the host OS with an IBM z14 GA2 CPU in the guest. (JIRA:RHELPLAN-13649)
Nvidia NVLink2 is now compatible with virtual machines on IBM POWER9
Nvidia vGPUs that support the NVLink2 feature can now be assigned to virtual machines (VMs) running in a RHEL 8 host on an IBM POWER9 system. This makes it possible for these VMs to use the full performance potential of NVLink2. (JIRA:RHELPLAN-12811)
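A sketch of the virt-xml invocation described in the entry above; the VM name testguest comes from that entry, and the option order is illustrative:

# Change the boot device of the testguest VM to network, start it, and keep the change non-persistent
virt-xml testguest --edit --boot network --start --no-define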
6.2. New Drivers
Network Drivers
* Serial Line Internet Protocol support (slip.ko.xz)
* Platform CAN bus driver for Bosch C_CAN controller (c_can_platform.ko.xz)
* virtual CAN interface (vcan.ko.xz)
* Softing DPRAM CAN driver (softing.ko.xz)
* serial line CAN interface (slcan.ko.xz)
* CAN driver for EMS Dr. Thomas Wuensche CAN/USB interfaces (ems_usb.ko.xz)
* CAN driver for esd CAN-USB/2 and CAN-USB/Micro interfaces (esd_usb2.ko.xz)
* Socket-CAN driver for SJA1000 on the platform bus (sja1000_platform.ko.xz)
* Socket-CAN driver for PLX90xx PCI-bridge cards with the SJA1000 chips (plx_pci.ko.xz)
* Socket-CAN driver for EMS CPC-PCI/PCIe/104P CAN cards (ems_pci.ko.xz)
* Socket-CAN driver for KVASER PCAN PCI cards (kvaser_pci.ko.xz)
* Intel(R) 2.5G Ethernet Linux Driver (igc.ko.xz)
* Realtek 802.11ac wireless PCI driver (rtwpci.ko.xz)
* Realtek 802.11ac wireless core module (rtw88.ko.xz)
* MediaTek MT76 devices support (mt76.ko.xz)
* MediaTek MT76x0U (USB) support (mt76x0u.ko.xz)
* MediaTek MT76x2U (USB) support (mt76x2u.ko.xz)
Graphics Drivers and Miscellaneous Drivers
* Virtual Kernel Mode Setting (vkms.ko.xz)
* Intel GTT (Graphics Translation Table) routines (intel-gtt.ko.xz)
* Xen frontend/backend page directory based shared buffer handling (xen-front-pgdir-shbuf.ko.xz)
* LED trigger for audio mute control (ledtrig-audio.ko.xz)
* Host Wireless Adapter Radio Control Driver (hwa-rc.ko.xz)
* Network Block Device (nbd.ko.xz)
* Pericom PI3USB30532 Type-C mux driver (pi3usb30532.ko.xz)
* Fairchild FUSB302 Type-C Chip Driver (fusb302.ko.xz)
* TI TPS6598x USB Power Delivery Controller Driver (tps6598x.ko.xz)
* Intel PCH Thermal driver (intel_pch_thermal.ko.xz)
* PCIe AER software error injector (aer_inject.ko.xz)
* Simple stub driver for PCI SR-IOV PF device (pci-pf-stub.ko.xz)
* mISDN Digital Audio Processing support (mISDN_dsp.ko.xz)
* ISDN layer 1 for Cologne Chip HFC-4S/8S chips (hfc4s8s_l1.ko.xz)
* ISDN4Linux: Call diversion support (dss1_divert.ko.xz)
* CAPI4Linux: Userspace /dev/capi20 interface (capi.ko.xz)
* USB Driver for Gigaset 307x (bas_gigaset.ko.xz)
* ISDN4Linux: Driver for HYSDN cards (hysdn.ko.xz)
* mISDN Digital Audio Processing support (mISDN_dsp.ko.xz)
* mISDN driver for Winbond w6692 based cards (w6692.ko.xz)
* mISDN driver for CCD's hfc-pci based cards (hfcpci.ko.xz)
* mISDN driver for hfc-4s/hfc-8s/hfc-e1 based cards (hfcmulti.ko.xz)
* mISDN driver for NETJet (netjet.ko.xz)
* mISDN driver for AVM FRITZ!CARD PCI ISDN cards (avmfritz.ko.xz)
Storage Drivers
* NVMe over Fabrics TCP host (nvme-tcp.ko.xz)
* NVMe over Fabrics TCP target (nvmet-tcp.ko.xz)
* device-mapper writecache target (dm-writecache.ko.xz)
6.3.
Intel(R) Ethernet Connection XL710 Network Driver (i40e.ko.xz) has been updated to version 2.8.20-k. The Netronome Flow Processor (NFP) driver (nfp.ko.xz) has been updated to version 4.18.0-147.el8.x86_64. Elastic Network Adapter (ENA) (ena.ko.xz) has been updated to version 2.0.3K. Graphics and Miscellaneous Driver Updates Standalone drm driver for the VMware SVGA device (vmwgfx.ko.xz) has been updated to version 2.15.0.0. hpe watchdog driver (hpwdt.ko.xz) has been updated to version 2.0.2. Storage Driver Updates Driver for HP Smart Array Controller version 3.4.20-170-RH3 (hpsa.ko.xz) has been updated to version 3.4.20-170-RH3. LSI MPT Fusion SAS 3.0 Device Driver (mpt3sas.ko.xz) has been updated to version 28.100.00.00. Emulex LightPulse Fibre Channel SCSI driver 12.2.0.3 (lpfc.ko.xz) has been updated to version 0:12.2.0.3. QLogic QEDF 25/40/50/100Gb FCoE Driver (qedf.ko.xz) has been updated to version 8.37.25.20. Cisco FCoE HBA Driver (fnic.ko.xz) has been updated to version 1.6.0.47. QLogic Fibre Channel HBA Driver (qla2xxx.ko.xz) has been updated to version 10.01.00.15.08.1-k1. Driver for Microsemi Smart Family Controller version 1.2.6-015 (smartpqi.ko.xz) has been updated to version 1.2.6-015. QLogic FastLinQ 4xxxx iSCSI Module (qedi.ko.xz) has been updated to version 8.33.0.21. Broadcom MegaRAID SAS Driver (megaraid_sas.ko.xz) has been updated to version 07.707.51.00-rc1. 6.4. Bug fixes This part describes bugs fixed in Red Hat Enterprise Linux 8.1 that have a significant impact on users. 6.4.1. Installer and image creation Using the version or inst.version kernel boot parameters no longer stops the installation program Previously, booting the installation program from the kernel command line using the version or inst.version boot parameters printed the version, for example anaconda 30.25.6 , and stopped the installation program. With this update, the version and inst.version parameters are ignored when the installation program is booted from the kernel command line, and as a result, the installation program is not stopped. (BZ#1637472) The xorg-x11-drv-fbdev , xorg-x11-drv-vesa , and xorg-x11-drv-vmware video drivers are now installed by default Previously, workstations with specific models of NVIDIA graphics cards and workstations with specific AMD accelerated processing units did not display the graphical login window after a RHEL 8.0 Server installation. This issue also impacted virtual machines relying on EFI for graphics support, such as Hyper-V. With this update, the xorg-x11-drv-fbdev , xorg-x11-drv-vesa , and xorg-x11-drv-vmware video drivers are installed by default and the graphical login window is displayed after a RHEL 8.0 and later Server installation. (BZ#1687489) Rescue mode no longer fails without displaying an error message Previously, running rescue mode on a system with no Linux partitions resulted in the installation program failing with an exception. With this update, the installation program displays the error message "You don't have any Linux partitions" when a system with no Linux partitions is detected. (BZ#1628653) The installation program now sets the lvm_metadata_backup Blivet flag for image installations Previously, the installation program failed to set the lvm_metadata_backup Blivet flag for image installations. As a consequence, LVM backup files were located in the /etc/lvm/ subdirectory after an image installation. 
With this update, the installation program sets the lvm_metadata_backup Blivet flag, and as a result, there are no LVM backup files located in the /etc/lvm/ subdirectory after an image installation. (BZ#1673901)
The RHEL 8 installation program now handles strings from RPM
Previously, when the python3-rpm library returned a string, the installation program failed with an exception. With this update, the installation program can now handle strings from RPM. (BZ#1689909)
The inst.repo kernel boot parameter now works for a repository on a hard drive that has a non-root path
Previously, the RHEL 8 installation process could not proceed without manual intervention if the inst.repo=hd:<device>:<path> kernel boot parameter was pointing to a repository (not an ISO image) on a hard drive, and a non-root (/) path was used. With this update, the installation program can now propagate any <path> for a repository located on a hard drive, ensuring the installation proceeds as normal. (BZ#1689194)
The --changesok option now allows the installation program to change the root password
Previously, using the --changesok option when installing Red Hat Enterprise Linux 8 from a Kickstart file did not allow the installation program to change the root password. With this update, the --changesok option is successfully passed by Kickstart, and as a result, users specifying the pwpolicy root --changesok option in their Kickstart file can now change the root password using the GUI, even if the password has already been set by Kickstart. (BZ#1584145)
Image Building no longer fails when using the lorax-composer API
Previously, when using the lorax-composer API from a subscribed RHEL system, the image building process always failed because Anaconda could not access the repositories: the subscription certificates from the host were not passed through. To fix the issue, update the lorax-composer, pykickstart, and Anaconda packages, which allows the supported CDN certificates to be passed through. (BZ#1663950)
6.4.2. Shells and command-line tools
systemd in debug mode no longer produces unnecessary log messages
When using the systemd system and service manager in debug mode, systemd previously produced unnecessary and harmless log messages. With this update, systemd has been fixed to no longer produce these unnecessary debug messages. (BZ#1658691)
6.4.3. Security
fapolicyd no longer prevents RHEL updates
When an update replaces the binary of a running application, the kernel modifies the application binary path in memory by appending the " (deleted)" suffix. Previously, the fapolicyd file access policy daemon treated such applications as untrusted, and prevented them from opening and executing any other files. As a consequence, the system was sometimes unable to boot after applying updates. With the release of the RHBA-2020:5241 advisory, fapolicyd ignores the suffix in the binary path so the binary can match the trust database. As a result, fapolicyd enforces the rules correctly and the update process can finish. (BZ#1897092)
SELinux no longer prevents Tomcat from sending emails
Prior to this update, the SELinux policy did not allow the tomcat_t and pki_tomcat_t domains to connect to SMTP ports. Consequently, SELinux prevented applications on the Tomcat server from sending emails. With this update of the selinux-policy packages, the policy allows processes from the Tomcat domains to access SMTP ports, and SELinux no longer prevents applications on Tomcat from sending emails.
(BZ#1687798) lockdev now runs correctly with SELinux Previously, the lockdev tool could not transition into the lockdev_t context even though the SELinux policy for lockdev_t was defined. As a consequence, lockdev was allowed to run in the 'unconfined_t' domain when used by the root user. This introduced vulnerabilities into the system. With this update, the transition into lockdev_t has been defined, and lockdev can now be used correctly with SELinux in enforcing mode. (BZ#1673269) iotop now runs correctly with SELinux Previously, the iotop tool could not transition into the iotop_t context even though the SELinux policy for iotop_t was defined. As a consequence, iotop was allowed to run in the 'unconfined_t' domain when used by the root user. This introduced vulnerabilities into the system. With this update, the transition into iotop_t has been defined, and iotop can now be used correctly with SELinux in enforcing mode. (BZ#1671241) SELinux now properly handles NFS 'crossmnt' The NFS protocol with the crossmnt option automatically creates internal mounts when a process accesses a subdirectory already used as a mount point on the server. Previously, this caused SELinux to check whether the process accessing an NFS mounted directory had a mount permission, which caused AVC denials. In the current version, SELinux permission checking skips these internal mounts. As a result, accessing an NFS directory that is mounted on the server side does not require mount permission. (BZ#1647723) An SELinux policy reload no longer causes false ENOMEM errors Reloading the SELinux policy previously caused the internal security context lookup table to become unresponsive. Consequently, when the kernel encountered a new security context during a policy reload, the operation failed with a false "Out of memory" (ENOMEM) error. With this update, the internal Security Identifier (SID) lookup table has been redesigned and no longer freezes. As a result, the kernel no longer returns misleading ENOMEM errors during an SELinux policy reload. (BZ#1656787) Unconfined domains can now use smc_socket Previously, the SELinux policy did not have the allow rules for the smc_socket class. Consequently, SELinux blocked an access to smc_socket for the unconfined domains. With this update, the allow rules have been added to the SELinux policy. As a result, the unconfined domains can use smc_socket . (BZ#1683642) Kerberos cleanup procedures are now compatible with GSSAPIDelegateCredentials and default cache from krb5.conf Previously, when the default_ccache_name option was configured in the krb5.conf file, the kerberos credentials were not cleaned up with the GSSAPIDelegateCredentials and GSSAPICleanupCredentials options set. This bug is now fixed by updating the source code to clean up credential caches in the described use cases. After the configuration, the credential cache gets cleaned up on exit if the user configures it. ( BZ#1683295 ) OpenSSH now correctly handles PKCS #11 URIs for keys with mismatching labels Previously, specifying PKCS #11 URIs with the object part (key label) could prevent OpenSSH from finding related objects in PKCS #11. With this update, the label is ignored if the matching objects are not found, and keys are matched only by their IDs. As a result, OpenSSH is now able to use keys on smart cards referenced using full PKCS #11 URIs. 
(BZ#1671262)
SSH connections with VMware-hosted systems now work properly
The current version of the OpenSSH suite introduced a change of the default IP Quality of Service (IPQoS) flags in SSH packets, which was not correctly handled by the VMware virtualization platform. Consequently, it was not possible to establish an SSH connection with systems on VMware. The problem has been fixed in VMware Workstation 15, and SSH connections with VMware-hosted systems now work correctly. (BZ#1651763)
curve25519-sha256 is now supported by default in OpenSSH
Previously, the curve25519-sha256 SSH key exchange algorithm was missing in the system-wide crypto policies configurations for the OpenSSH client and server even though it was compliant with the default policy level. As a consequence, if a client or a server used curve25519-sha256 and this algorithm was not supported by the host, the connection might fail. This update of the crypto-policies package fixes the bug, and SSH connections no longer fail in the described scenario. (BZ#1678661)
Ansible playbooks for OSPP and PCI-DSS profiles no longer exit after encountering a failure
Previously, Ansible remediations for the Protection Profile for General Purpose Operating Systems (OSPP) and the Payment Card Industry Data Security Standard (PCI-DSS) profiles failed due to incorrect ordering and other errors in the remediations. This update fixes the ordering and errors in generated Ansible remediation playbooks, and Ansible remediations now work correctly. (BZ#1741455)
Audit transport=KRB5 now works properly
Prior to this update, Audit KRB5 transport mode did not work correctly. Consequently, Audit remote logging using Kerberos peer authentication did not work. With this update, the problem has been fixed, and Audit remote logging now works properly in the described scenario. (BZ#1730382)
6.4.4. Networking
The kernel now supports destination MAC addresses in bitmap:ipmac, hash:ipmac, and hash:mac IP set types
Previously, the kernel implementation of the bitmap:ipmac, hash:ipmac, and hash:mac IP set types only allowed matching on the source MAC address, while destination MAC addresses could be specified, but were not matched against set entries. As a consequence, administrators could create iptables rules that used a destination MAC address in one of these IP set types, but packets matching the given specification were not actually classified. With this update, the kernel compares the destination MAC address and returns a match if the specified classification corresponds to the destination MAC address of a packet. As a result, rules that match packets against the destination MAC address now work correctly. (BZ#1649087)
The gnome-control-center application now supports editing advanced IPsec settings
Previously, the gnome-control-center application only displayed the advanced options of IPsec VPN connections. Consequently, users could not change these settings. With this update, the fields in the advanced settings are now editable, and users can save the changes. (BZ#1697329)
The TRACE target in the iptables-extensions(8) man page has been updated
Previously, the description of the TRACE target in the iptables-extensions(8) man page referred only to the compat variant, but Red Hat Enterprise Linux 8 uses the nf_tables variant. As a consequence, the man page did not reference the xtables-monitor command-line utility to display TRACE events. The man page has been updated and, as a result, now mentions xtables-monitor. (BZ#1658734)
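A sketch of displaying TRACE events with the utility mentioned above; it assumes iptables rules with a TRACE target are already in place and must be run as root:

# Print kernel TRACE events generated by iptables rules (nf_tables variant)
xtables-monitor --trace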
Error logging in the ipset service has been improved
Previously, the ipset service did not report configuration errors with a meaningful severity in the systemd logs. The severity level for invalid configuration entries was only informational, and the service did not report errors for an unusable configuration. As a consequence, it was difficult for administrators to identify and troubleshoot issues in the ipset service's configuration. With this update, ipset reports configuration issues as warnings in systemd logs and, if the service fails to start, it logs an entry with the error severity including further details. As a result, it is now easier to troubleshoot issues in the configuration of the ipset service. (BZ#1683711)
The ipset service now ignores invalid configuration entries during startup
The ipset service stores configurations as sets in separate files. Previously, when the service started, it restored the configuration from all sets in a single operation, without filtering invalid entries that can be inserted by manually editing a set. As a consequence, if a single configuration entry was invalid, the service did not restore further unrelated sets. The problem has been fixed. As a result, the ipset service detects and ignores invalid configuration entries during the restore operation, and the remaining sets are restored. (BZ#1683713)
The ipset list command reports consistent memory for hash set types
When you add entries to a hash set type, the ipset utility must resize the in-memory representation to make room for new entries by allocating an additional memory block. Previously, ipset set the total per-set allocated size to only the size of the new block instead of adding the value to the current in-memory size. As a consequence, the ipset list command reported an inconsistent memory size. With this update, ipset correctly calculates the in-memory size. As a result, the ipset list command now displays the correct in-memory size of the set, and the output matches the actual allocated memory for hash set types. (BZ#1714111)
The kernel now correctly updates PMTU when receiving an ICMPv6 Packet Too Big message
In certain situations, such as for link-local addresses, more than one route can match a source address. Previously, the kernel did not check the input interface when receiving Internet Control Message Protocol Version 6 (ICMPv6) packets. Therefore, the route lookup could return a destination that did not match the input interface. Consequently, when receiving an ICMPv6 Packet Too Big message, the kernel could update the Path Maximum Transmission Unit (PMTU) for a different input interface. With this update, the kernel checks the input interface during the route lookup. As a result, the kernel now updates the correct destination based on the source address, and PMTU works as expected in the described scenario. (BZ#1721961)
The /etc/hosts.allow and /etc/hosts.deny files no longer contain outdated references to removed tcp_wrappers
Previously, the /etc/hosts.allow and /etc/hosts.deny files contained outdated information about the tcp_wrappers package. The files have been removed in RHEL 8 because they are no longer needed by tcp_wrappers, which has also been removed. (BZ#1663556)
6.4.5. Kernel
tpm2-abrmd-selinux now has a proper dependency on selinux-policy-targeted
Previously, the tpm2-abrmd-selinux package had a dependency on the selinux-policy-base package instead of the selinux-policy-targeted package.
Consequently, if a system had selinux-policy-minimum installed instead of selinux-policy-targeted , installation of the tpm2-abrmd-selinux package failed. This update fixes the bug and tpm2-abrmd-selinux can be installed correctly in the described scenario. (BZ#1642000) All /sys/kernel/debug files can be accessed Previously, the return value for "Operation not permitted" (EPERM) error remained set until the end of the function regardless of the error. Consequently, any attempts to access certain /sys/kernel/debug (debugfs) files failed with an unwarranted EPERM error. This update moves the EPERM return value to the following block. As a result, debugfs files can be accessed without problems in the described scenario. (BZ#1686755) NICs are no longer affected by a bug in the qede driver for the 41000 and 45000 FastLinQ series Previously, firmware upgrade and debug data collection operations failed due to a bug in the qede driver for the 41000 and 45000 FastLinQ series. It made the NIC unusable. The reboot (PCI reset) of the host made the NIC operational again. This issue could occur in the following scenarios: during the upgrade of Firmware of the NIC using the inbox driver during the collection of debug data running the ethtool -d ethx command while running an sosreport command that included ethtool -d ethx. during the initiation of automatic debug data collection by the inbox driver, such as I/O timeout, Mail Box Command time-out and a Hardware Attention. To fix this issue, Red Hat released an erratum via Red Hat Bug Advisory (RHBA). Before the release of RHBA, it was recommended to create a case in https://access.redhat.com/support to request for supported fix. (BZ#1697310) The generic EDAC GHES driver now detects which DIMM reported an error Previously, the EDAC GHES driver was not able to detect which DIMM reported an error. Consequently, the following error message appeared: The driver has been now updated to scan the DMI (SMBIOS) tables to detect the specific DIMM that matches the Desktop Management Interface (DMI) handle 0x<ADDRESS> . As a result, EDAC GHES correctly detects which specific DIMM reported a hardware error. (BZ#1721386) podman is able to checkpoint containers in RHEL 8 Previously, the version of the Checkpoint and Restore In Userspace (CRIU) package was outdated. Consequently, CRIU did not support container checkpoint and restore functionality, and the podman utility failed to checkpoint containers. When running the podman container checkpoint command, the following error message was displayed: This update fixes the problem by upgrading the version of the CRIU package. As a result, podman now supports container checkpoint and restore functionality. (BZ#1689746) early-kdump and standard kdump no longer fail if the add_dracutmodules+=earlykdump option is used in dracut.conf Previously, an inconsistency occurred between the kernel version being installed for early-kdump and the kernel version initramfs was generated for. As a consequence, booting failed when early-kdump was enabled. In addition, if early-kdump detected that it was being included in a standard kdump initramfs image, it forced an exit. Therefore the standard kdump service also failed when trying to rebuild kdump initramfs if early-kdump was added as a default dracut module. As a consequence, early-kdump and standard kdump both failed. With this update, early-kdump uses the consistent kernel name during the installation, only the version differs from the running kernel. 
Also, the standard kdump service will forcibly drop early-kdump to avoid image generation failure. As a result, early-kdump and standard kdump no longer fail in the described scenario. (BZ#1662911) The first kernel with SME enabled now succeeds in dumping the vmcore Previously, the encrypted memory in the first kernel with the active Secure Memory Encryption (SME) feature caused a failure of the kdump mechanism. Consequently, the first kernel was not able to dump the contents (vmcore) of its memory. With this update, the ioremap_encrypted() function has been added to remap the encrypted memory and modify the related code. As a result, the encrypted first kernel's memory is now properly accessed, and the vmcore can be dumped and parsed by the crash tools in the described scenario. (BZ#1564427) The first kernel with SEV enabled now succeeds in dumping the vmcore Previously, the encrypted memory in the first kernel with the active Secure Encrypted Virtualization (SEV) feature caused a failure of the kdump mechanism. Consequently, the first kernel was not able to dump the contents (vmcore) of its memory. With this update, the ioremap_encrypted() function has been added to remap the encrypted memory and modify the related code. As a result, the first kernel's encrypted memory is now properly accessed, and the vmcore can be dumped and parsed by the crash tools in the described scenario. (BZ#1646810) Kernel now reserves more space for SWIOTLB Previously, when Secure Encrypted Virtualization (SEV) or Secure Memory Encryption (SME) features was enabled in the kernel, the Software Input Output Translation Lookaside Buffer (SWIOTLB) technology had to be enabled as well and consumed a significant amount of memory. Consequently, the capture kernel failed to boot or got an out-of-memory error. This update fixes the bug by reserving extra crashkernel memory for SWIOTLB while SEV/SME is active. As a result, the capture kernel has more memory reserved for SWIOTLB and the bug no longer appears in the described scenario. (BZ#1728519) C-state transitions can now be disabled during hwlatdetect runs To achieve real-time performance, the hwlatdetect utility needs to be able to disable power saving in the CPU during test runs. This update allows hwlatdetect to turn off C-state transitions for the duration of the test run and hwlatdetect is now able to detect hardware latencies more accurately. ( BZ#1707505 ) 6.4.6. Hardware enablement The openmpi package can be installed now Previously, a rebase on opensm package changed its soname mechanism. As a consequence, the openmpi package could not be installed due to unresolved dependencies. This update fixes the problem. As a result, the openmpi package can be installed now without any issue. (BZ#1717289) 6.4.7. File systems and storage The RHEL 8 installation program now uses the entry ID to set the default boot entry Previously, the RHEL 8 installation program used the index of the first boot entry as the default, instead of using the entry ID. As a consequence, adding a new boot entry became the default, as it was sorted first and set to the first index. With this update, the installation program uses the entry ID to set the default boot entry, and as a result, the default entry is not changed, even if boot entries are added and sorted before the default. 
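To check which entry is currently set as the default on an installed system, you can, for example, list the saved entry ID and the default kernel:

# grub2-editenv list
# grubby --default-kernel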
( BZ#1671047 )

The system now boots successfully when SME is enabled with smartpqi

Previously, the system failed to boot on certain AMD machines when the Secure Memory Encryption (SME) feature was enabled and the root disk was using the smartpqi driver. When the boot failed, the system displayed a message similar to the following in the boot log: This problem was caused by the smartpqi driver, which was falling back to the Software Input Output Translation Lookaside Buffer (SWIOTLB) because the coherent Direct Memory Access (DMA) mask was not set. With this update, the coherent DMA mask is now correctly set. As a result, the system now boots successfully when SME is enabled on machines that use the smartpqi driver for the root disk. (BZ#1712272)

FCoE LUNs do not disappear after being created on the bnx2fc cards

Previously, after creating an FCoE LUN on the bnx2fc cards, the FCoE LUNs were not attached correctly. As a consequence, FCoE LUNs disappeared after being created on the bnx2fc cards on RHEL 8.0. With this update, FCoE LUNs are attached correctly. As a result, it is now possible to discover the FCoE LUNs after they are created on the bnx2fc cards. (BZ#1685894)

VDO volumes no longer lose deduplication advice after moving to a different-endian platform

Previously, the Universal Deduplication Service (UDS) index lost all deduplication advice after moving the VDO volume to a platform that used a different endianness. As a consequence, VDO was unable to deduplicate newly written data against the data that was stored before you moved the volume, leading to lower space savings. With this update, you can now move VDO volumes between platforms that use a different endianness without losing deduplication advice. ( BZ#1696492 )

kdump service works on large IBM POWER systems

Previously, the RHEL 8 kdump kernel did not start. As a consequence, the kdump initrd file on large IBM POWER systems was not created. With this update, the squashfs-tools-4.3-19.el8 component is added. This update adds a limit (128) to the number of CPUs which the squashfs-tools-4.3-19.el8 component can use from the available pool, instead of using all the available CPUs. This fixes the error caused by running out of resources. As a result, the kdump service now works on large IBM POWER systems. (BZ#1716278)

Verbosity debug options now added to nfs.conf

Previously, the /etc/nfs.conf file and the nfs.conf(5) man page did not include the following options:

- verbosity
- rpc-verbosity

As a consequence, users were unaware of the availability of these debug flags. With this update, these flags are now included in the [gssd] section of the /etc/nfs.conf file and are also documented in the nfs.conf(5) man page. (BZ#1668026)

6.4.8. Dynamic programming languages, web and database servers

Socket::inet_aton() can now be used from multiple threads safely

Previously, the Socket::inet_aton() function, used for resolving a domain name from multiple Perl threads, called the unsafe gethostbyname() glibc function. Consequently, an incorrect IPv4 address was occasionally returned, or the Perl interpreter terminated unexpectedly. With this update, the Socket::inet_aton() implementation has been changed to use the thread-safe getaddrinfo() glibc function instead of gethostbyname() . As a result, the inet_aton() function from the Perl Socket module can be used from multiple threads safely. ( BZ#1699793 , BZ#1699958 )
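As a quick illustration of the now thread-safe behavior, resolving a name from several Perl threads can be exercised with a one-liner such as the following; the host name is only an example:

$ perl -Mthreads -MSocket=inet_aton -e 'my @t = map { threads->create(sub { inet_aton("localhost") }) } 1..4; $_->join() for @t;'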
6.4.9. Compilers and development tools

gettext returns untranslated text even when out of memory

Previously, the gettext() function for text localization returned the NULL value instead of text when out of memory, resulting in applications lacking text output or labels. The bug has been fixed and now, gettext() returns untranslated text when out of memory, as expected. ( BZ#1663035 )

The locale command now warns about LOCPATH being set whenever it encounters an error during execution

Previously, the locale command did not provide any diagnostics for the LOCPATH environment variable when it encountered errors due to an invalid LOCPATH . The locale command now warns that LOCPATH has been set whenever it encounters an error during execution. As a result, locale now reports LOCPATH along with any underlying errors that it encounters. ( BZ#1701605 )

gdb can now read and correctly represent z registers in core files on aarch64 SVE

Previously, the gdb component failed to read z registers from core files with the aarch64 scalable vector extension (SVE) architecture. With this update, the gdb component is now able to read z registers from core files. As a result, the info register command successfully shows the z register contents. (BZ#1669953)

GCC rebased to version 8.3.1

The GNU Compiler Collection (GCC) has been updated to upstream version 8.3.1. This version brings a large number of miscellaneous bug fixes. ( BZ#1680182 )

6.4.10. Identity Management

FreeRADIUS now resolves hostnames pointing to IPv6 addresses

In RHEL 8 versions of FreeRADIUS, the ipaddr utility only supported IPv4 addresses. Consequently, for the radiusd daemon to resolve IPv6 addresses, a manual update of the configuration was required after an upgrade of the system from RHEL 7 to RHEL 8. This update fixes the underlying code, and ipaddr in FreeRADIUS now uses IPv6 addresses, too. ( BZ#1685546 )

The Nuxwdog service no longer fails to start the PKI server in HSM environments

Previously, due to bugs, the keyutils package was not installed as a dependency of the pki-core package. Additionally, the Nuxwdog watchdog service failed to start the public key infrastructure (PKI) server in environments that use a hardware security module (HSM). These problems have been fixed. As a result, the required keyutils package is now installed automatically as a dependency, and Nuxwdog starts the PKI server as expected in environments with HSM. ( BZ#1695302 )

The IdM server now works correctly in the FIPS mode

Previously, the SSL connector for the Tomcat server was incompletely implemented. As a consequence, the Identity Management (IdM) server with an installed certificate server did not work on machines with the FIPS mode enabled. This bug has been fixed by adding JSSTrustManager and JSSKeyManager . As a result, the IdM server works correctly in the described scenario. Note that there are several bugs that prevent the IdM server from running in the FIPS mode in RHEL 8. This update fixes just one of them. ( BZ#1673296 )

The KCM credential cache is now suitable for a large number of credentials in a single credential cache

Previously, if the Kerberos Credential Manager (KCM) contained a large number of credentials, Kerberos operations, such as kinit , failed due to a limitation of the size of entries in the database and the number of these entries.
This update introduces the following new configuration options to the kcm section of the sssd.conf file:

- max_ccaches (integer)
- max_uid_ccaches (integer)
- max_ccache_size (integer)

As a result, KCM can now handle a large number of credentials in a single ccache. For further information on the configuration options, see the sssd-kcm man page. (BZ#1448094)

Samba no longer denies access when using the sss ID mapping plug-in

Previously, when you ran Samba on a domain member and added a configuration that used the sss ID mapping back end to the /etc/samba/smb.conf file to share directories, changes in the ID mapping back end caused errors. Consequently, Samba denied access to files in certain cases, even if the user or group existed and it was known by SSSD. The problem has been fixed. As a result, Samba no longer denies access when using the sss plug-in. ( BZ#1657665 )

Default SSSD time-out values no longer conflict with each other

Previously, there was a conflict between the default time-out values. The default values for the following options have been changed to improve the failover capability:

- dns_resolver_op_timeout - set to 2s (previously 6s)
- dns_resolver_timeout - set to 4s (previously 6s)
- ldap_opt_timeout - set to 8s (previously 6s)

Also, a new dns_resolver_server_timeout option, with a default value of 1000 ms, has been added, which specifies the time-out duration for SSSD to switch from one DNS server to another. (BZ#1382750)

6.4.11. Desktop

systemctl isolate multi-user.target now displays the console prompt

When running the systemctl isolate multi-user.target command from GNOME Terminal in a GNOME Desktop session, only a cursor was displayed, and not the console prompt. This update fixes gdm , and the console prompt is now displayed as expected in the described situation. ( BZ#1678627 )

6.4.12. Graphics infrastructures

The 'i915' display driver now supports display configurations up to 3x4K

Previously, it was not possible to have display configurations larger than 2x4K when using the 'i915' display driver in an Xorg session. With this update, the 'i915' driver now supports display configurations up to 3x4K. (BZ#1664969)

Linux guests no longer display an error when initializing the GPU driver

Previously, Linux guests returned a warning when initializing the GPU driver. This happened because Intel Graphics Virtualization Technology (GVT-g) only simulates the DisplayPort (DP) interface for the guest and leaves the 'EDP_PSR_IMR' and 'EDP_PSR_IIR' registers as default memory-mapped I/O (MMIO) read/write registers. To resolve this issue, handlers have been added to these registers and the warning is no longer returned. (BZ#1643980)

6.4.13. The web console

It is possible to log in to the RHEL web console with the session_recording shell

Previously, it was not possible for users of the tlog shell (which enables session recording) to log in to the RHEL web console. This update fixes the bug. The workaround of adding the tlog-rec-session shell to /etc/shells/ should be reverted after installing this update. (BZ#1631905)

6.4.14. Virtualization

Hot-plugging PCI devices to a pcie-to-pci bridge controller works correctly

Previously, if a guest virtual machine configuration contained a pcie-to-pci-bridge controller that had no endpoint devices attached to it at the time the guest was started, hot-plugging new devices to that controller was not possible. This update improves how hot-plugging legacy PCI devices on a PCIe system is handled, which prevents the problem from occurring.
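For orientation, such a controller appears in the guest's libvirt domain XML as a fragment similar to the following; the index and address attributes are assigned by libvirt and will differ between machines:

<controller type='pci' model='pcie-to-pci-bridge'/>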
( BZ#1619884 ) Enabling nested virtualization no longer blocks live migration Previously, the nested virtualization feature was incompatible with live migration. As a consequence, enabling nested virtualization on a RHEL 8 host prevented migrating any virtual machines (VMs) from the host, as well as saving VM state snapshots to disk. This update fixes the described problem, and the impacted VMs are now possible to migrate. ( BZ#1689216 ) 6.4.15. Supportability redhat-support-tool now creates an sosreport archive Previously, the redhat-support-tool utility was unable to create an sosreport archive. The workaround was running the sosreport command separately and then entering the redhat-support-tool addattachment -c command to upload the archive. Users can also use the web UI on Customer Portal which creates the customer case and uploads the sosreport archive. In addition, command options such as findkerneldebugs , btextract , analyze , or diagnose do not work as expected and will be fixed in a future release. ( BZ#1688274 ) 6.5. Technology Previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 8.1. For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope . 6.5.1. Networking TIPC has full support The Transparent Inter Process Communication ( TIPC ) is a protocol specially designed for efficient communication within clusters of loosely paired nodes. It works as a kernel module and provides a tipc tool in iproute2 package to allow designers to create applications that can communicate quickly and reliably with other applications regardless of their location within the cluster. This feature is now fully supported in RHEL 8. (BZ#1581898) eBPF for tc available as a Technology Preview As a Technology Preview, the Traffic Control (tc) kernel subsystem and the tc tool can attach extended Berkeley Packet Filtering (eBPF) programs as packet classifiers and actions for both ingress and egress queueing disciplines. This enables programmable packet processing inside the kernel network data path. ( BZ#1699825 ) nmstate available as a Technology Preview Nmstate is a network API for hosts. The nmstate packages, available as a Technology Preview, provide a library and the nmstatectl command-line utility to manage host network settings in a declarative manner. The networking state is described by a pre-defined schema. Reporting of the current state and changes to the desired state both conform to the schema. For further details, see the /usr/share/doc/nmstate/README.md file and the examples in the /usr/share/doc/nmstate/examples directory. (BZ#1674456) AF_XDP available as a Technology Preview Address Family eXpress Data Path ( AF_XDP ) socket is designed for high-performance packet processing. It accompanies XDP and grants efficient redirection of programmatically selected packets to user space applications for further processing. (BZ#1633143) XDP available as a Technology Preview The eXpress Data Path (XDP) feature, which is available as a Technology Preview, provides a means to attach extended Berkeley Packet Filter (eBPF) programs for high-performance packet processing at an early point in the kernel ingress data path, allowing efficient programmable packet analysis, filtering, and manipulation. (BZ#1503672) KTLS available as a Technology Preview In Red Hat Enterprise Linux 8, Kernel Transport Layer Security (KTLS) is provided as a Technology Preview. 
KTLS handles TLS records using the symmetric encryption or decryption algorithms in the kernel for the AES-GCM cipher. KTLS also provides the interface for offloading TLS record encryption to Network Interface Controllers (NICs) that support this functionality. (BZ#1570255)

The systemd-resolved service is now available as a Technology Preview

The systemd-resolved service provides name resolution to local applications. The service implements a caching and validating DNS stub resolver, as well as a Link-Local Multicast Name Resolution (LLMNR) and Multicast DNS resolver and responder. Note that, even if the systemd package provides systemd-resolved , this service is an unsupported Technology Preview. (BZ#1906489)

6.5.2. Kernel

Control Group v2 available as a Technology Preview in RHEL 8

The Control Group v2 mechanism is a unified hierarchy control group. Control Group v2 organizes processes hierarchically and distributes system resources along the hierarchy in a controlled and configurable manner. Unlike the previous version, Control Group v2 has only a single hierarchy. This single hierarchy enables the Linux kernel to:

- Categorize processes based on the role of their owner.
- Eliminate issues with conflicting policies of multiple hierarchies.

Control Group v2 supports numerous controllers:

- The CPU controller regulates the distribution of CPU cycles. This controller implements:
  - Weight and absolute bandwidth limit models for normal scheduling policy.
  - Absolute bandwidth allocation model for real time scheduling policy.
- The Memory controller regulates the memory distribution. Currently, the following types of memory usages are tracked:
  - Userland memory - page cache and anonymous memory.
  - Kernel data structures such as dentries and inodes.
  - TCP socket buffers.
- The I/O controller regulates the distribution of I/O resources.
- The Writeback controller interacts with both the Memory and I/O controllers and is Control Group v2 specific.

The information above is based on https://www.kernel.org/doc/Documentation/cgroup-v2.txt , which you can also refer to for more information about particular Control Group v2 controllers. ( BZ#1401552 )

kexec fast reboot as a Technology Preview

The kexec fast reboot feature continues to be available as a Technology Preview. Rebooting is now significantly faster thanks to kexec fast reboot . To use this feature, load the kexec kernel manually, and then reboot the operating system. ( BZ#1769727 )

eBPF available as a Technology Preview

Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in a restricted sandbox environment with access to a limited set of functions. The virtual machine includes a new system call bpf() , which supports creating various types of maps, and also allows loading programs in a special assembly-like code. The code is then loaded to the kernel and translated to the native machine code with just-in-time compilation. Note that the bpf() syscall can be successfully used only by a user with the CAP_SYS_ADMIN capability, such as the root user. See the bpf (2) man page for more information. The loaded programs can be attached onto a variety of points (sockets, tracepoints, packet reception) to receive and process data. There are numerous components shipped by Red Hat that utilize the eBPF virtual machine. Each component is in a different development phase, and thus not all components are currently fully supported.
All components are available as a Technology Preview, unless a specific component is indicated as supported. The following notable eBPF components are currently available as a Technology Preview: The BPF Compiler Collection (BCC) tools package, a collection of dynamic kernel tracing utilities that use the eBPF virtual machine. The BCC tools package is available as a Technology Preview on the following architectures: the 64-bit ARM architecture, IBM Power Systems, Little Endian, and IBM Z. Note that it is fully supported on the AMD and Intel 64-bit architectures. bpftrace , a high-level tracing language that utilizes the eBPF virtual machine. The eXpress Data Path (XDP) feature, a networking technology that enables fast packet processing in the kernel using the eBPF virtual machine. (BZ#1559616) Soft-RoCE available as a Technology Preview Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) is a network protocol which implements RDMA over Ethernet. Soft-RoCE is the software implementation of RoCE which supports two protocol versions, RoCE v1 and RoCE v2. The Soft-RoCE driver, rdma_rxe , is available as an unsupported Technology Preview in RHEL 8. (BZ#1605216) 6.5.3. Hardware enablement The igc driver available as a Technology Preview for RHEL 8 The igc Intel 2.5G Ethernet Linux wired LAN driver is now available on all architectures for RHEL 8 as a Technology Preview. The ethtool utility also supports igc wired LANs. (BZ#1495358) 6.5.4. File systems and storage NVMe/TCP is available as a Technology Preview Accessing and sharing Nonvolatile Memory Express (NVMe) storage over TCP/IP networks (NVMe/TCP) and its corresponding nvme-tcp.ko and nvmet-tcp.ko kernel modules have been added as a Technology Preview. The use of NVMe/TCP as either a storage client or a target is manageable with tools provided by the nvme-cli and nvmetcli packages. NVMe/TCP provides a storage transport option along with the existing NVMe over Fabrics (NVMe-oF) transport, which include Remote Direct Memory Access (RDMA) and Fibre Channel (NVMe/FC). (BZ#1696451) File system DAX is now available for ext4 and XFS as a Technology Preview In Red Hat Enterprise Linux 8.1, file system DAX is available as a Technology Preview. DAX provides a means for an application to directly map persistent memory into its address space. To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application's address space. (BZ#1627455) OverlayFS OverlayFS is a type of union file system. It enables you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media. OverlayFS remains a Technology Preview under most circumstances. As such, the kernel logs warnings when this technology is activated. Full support is available for OverlayFS when used with supported container engines ( podman , cri-o , or buildah ) under the following restrictions: OverlayFS is supported for use only as a container engine graph driver or other specialized use cases, such as squashed kdump initramfs. 
Its use is supported primarily for container COW content, not for persistent storage. You must place any persistent storage on non-OverlayFS volumes. You can use only the default container engine configuration: one level of overlay, one lowerdir, and both lower and upper levels are on the same file system. Only XFS is currently supported for use as a lower layer file system. Additionally, the following rules and limitations apply to using OverlayFS: The OverlayFS kernel ABI and user-space behavior are not considered stable, and might change in future updates. OverlayFS provides a restricted set of the POSIX standards. Test your application thoroughly before deploying it with OverlayFS. The following cases are not POSIX-compliant: Lower files opened with O_RDONLY do not receive st_atime updates when the files are read. Lower files opened with O_RDONLY , then mapped with MAP_SHARED are inconsistent with subsequent modification. Fully compliant st_ino or d_ino values are not enabled by default on RHEL 8, but you can enable full POSIX compliance for them with a module option or mount option. To get consistent inode numbering, use the xino=on mount option. You can also use the redirect_dir=on and index=on options to improve POSIX compliance. These two options make the format of the upper layer incompatible with an overlay without these options. That is, you might get unexpected results or errors if you create an overlay with redirect_dir=on or index=on , unmount the overlay, then mount the overlay without these options. To determine whether an existing XFS file system is eligible for use as an overlay, use the following command and see if the ftype=1 option is enabled: SELinux security labels are enabled by default in all supported container engines with OverlayFS. Several known issues are associated with OverlayFS in this release. For details, see Non-standard behavior in the Linux kernel documentation . For more information about OverlayFS, see the Linux kernel documentation . (BZ#1690207) Stratis is now available as a Technology Preview Stratis is a new local storage manager. It provides managed file systems on top of pools of storage with additional features to the user. Stratis enables you to more easily perform storage tasks such as: Manage snapshots and thin provisioning Automatically grow file system sizes as needed Maintain file systems To administer Stratis storage, use the stratis utility, which communicates with the stratisd background service. Stratis is provided as a Technology Preview. For more information, see the Stratis documentation: Setting up Stratis file systems . (JIRA:RHELPLAN-1212) A Samba server, available to IdM and AD users logged into IdM hosts, can now be set up on an IdM domain member as a Technology Preview With this update, you can now set up a Samba server on an Identity Management (IdM) domain member. The new ipa-client-samba utility provided by the same-named package adds a Samba-specific Kerberos service principal to IdM and prepares the IdM client. For example, the utility creates the /etc/samba/smb.conf with the ID mapping configuration for the sss ID mapping back end. As a result, administrators can now set up Samba on an IdM domain member. Due to IdM Trust Controllers not supporting the Global Catalog Service, AD-enrolled Windows hosts cannot find IdM users and groups in Windows. Additionally, IdM Trust Controllers do not support resolving IdM groups using the Distributed Computing Environment / Remote Procedure Calls (DCE/RPC) protocols. 
As a consequence, AD users can only access the Samba shares and printers from IdM clients. For details, see Setting up Samba on an IdM domain member . (JIRA:RHELPLAN-13195) 6.5.5. High availability and clusters Pacemaker podman bundles available as a Technology Preview Pacemaker container bundles now run on the podman container platform, with the container bundle feature being available as a Technology Preview. There is one exception to this feature being Technology Preview: Red Hat fully supports the use of Pacemaker bundles for Red Hat Openstack. (BZ#1619620) Heuristics in corosync-qdevice available as a Technology Preview Heuristics are a set of commands executed locally on startup, cluster membership change, successful connect to corosync-qnetd , and, optionally, on a periodic basis. When all commands finish successfully on time (their return error code is zero), heuristics have passed; otherwise, they have failed. The heuristics result is sent to corosync-qnetd where it is used in calculations to determine which partition should be quorate. ( BZ#1784200 ) New fence-agents-heuristics-ping fence agent As a Technology Preview, Pacemaker now supports the fence_heuristics_ping agent. This agent aims to open a class of experimental fence agents that do no actual fencing by themselves but instead exploit the behavior of fencing levels in a new way. If the heuristics agent is configured on the same fencing level as the fence agent that does the actual fencing but is configured before that agent in sequence, fencing issues an off action on the heuristics agent before it attempts to do so on the agent that does the fencing. If the heuristics agent gives a negative result for the off action it is already clear that the fencing level is not going to succeed, causing Pacemaker fencing to skip the step of issuing the off action on the agent that does the fencing. A heuristics agent can exploit this behavior to prevent the agent that does the actual fencing from fencing a node under certain conditions. A user might want to use this agent, especially in a two-node cluster, when it would not make sense for a node to fence the peer if it can know beforehand that it would not be able to take over the services properly. For example, it might not make sense for a node to take over services if it has problems reaching the networking uplink, making the services unreachable to clients, a situation which a ping to a router might detect in that case. (BZ#1775847) 6.5.6. Identity Management Identity Management JSON-RPC API available as Technology Preview An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as Technology Preview. In Red Hat Enterprise Linux 7.3, the IdM API was enhanced to enable multiple versions of API commands. Previously, enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables: Administrators to use or later versions of IdM on the server than on the managing client. Developers to use a specific version of an IdM call, even if the IdM version changes on the server. In all cases, the communication with the server is possible, regardless if one side uses, for example, a newer version that introduces new options for a feature. For details on using the API, see Using the Identity Management API to Communicate with the IdM Server (TECHNOLOGY PREVIEW) . 
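For example, one simple way to observe the versioned JSON-RPC requests that the IdM command-line tools send is to increase the client verbosity; the commands below are only an illustration:

$ kinit admin
$ ipa -vv ping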
( BZ#1664719 ) DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now support DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2: http://tools.ietf.org/html/rfc6781#section-2 Secure Domain Name System (DNS) Deployment Guide: http://dx.doi.org/10.6028/NIST.SP.800-81-2 DNSSEC Key Rollover Timing Considerations: http://tools.ietf.org/html/rfc7583 Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices. ( BZ#1664718 ) 6.5.7. Graphics infrastructures VNC remote console available as a Technology Preview for the 64-bit ARM architecture On the 64-bit ARM architecture, the Virtual Network Computing (VNC) remote console is available as a Technology Preview. Note that the rest of the graphics stack is currently unverified for the 64-bit ARM architecture. (BZ#1698565) 6.5.8. Red Hat Enterprise Linux system roles The postfix role of RHEL system roles available as a Technology Preview Red Hat Enterprise Linux system roles provides a configuration interface for Red Hat Enterprise Linux subsystems, which makes system configuration easier through the inclusion of Ansible Roles. This interface enables managing system configurations across multiple versions of Red Hat Enterprise Linux, as well as adopting new major releases. The rhel-system-roles packages are distributed through the AppStream repository. The postfix role is available as a Technology Preview. The following roles are fully supported: kdump network selinux storage timesync For more information, see the Knowledgebase article about RHEL system roles . (BZ#1812552) rhel-system-roles-sap available as a Technology Preview The rhel-system-roles-sap package provides Red Hat Enterprise Linux (RHEL) system roles for SAP, which can be used to automate the configuration of a RHEL system to run SAP workloads. These roles greatly reduce the time to configure a system to run SAP workloads by automatically applying the optimal settings that are based on best practices outlined in relevant SAP Notes. Access is limited to RHEL for SAP Solutions offerings. Please contact Red Hat Customer Support if you need assistance with your subscription. The following new roles in the rhel-system-roles-sap package are available as a Technology Preview: sap-preconfigure sap-netweaver-preconfigure sap-hana-preconfigure For more information, see Red Hat Enterprise Linux system roles for SAP . Note: RHEL 8.1 for SAP Solutions is scheduled to be validated for use with SAP HANA on Intel 64 architecture and IBM POWER9. Other SAP applications and database products, for example, SAP NetWeaver and SAP ASE, can use RHEL 8.1 features. Please consult SAP Notes 2369910 and 2235581 for the latest information about validated releases and SAP support. (BZ#1660832) rhel-system-roles-sap rebased to version 1.1.1 With the RHBA-2019:4258 advisory, the rhel-system-roles-sap package has been updated to provide multiple bug fixes. 
Notably:

- SAP system roles work on hosts with non-English locales
- kernel.pid_max is set by the sysctl module
- nproc is set to unlimited for HANA (see SAP note 2772999 step 9)
- hard process limit is set before soft process limit
- code that sets process limits now works identically to role sap-preconfigure
- handlers/main.yml only works for non-uefi systems and is silently ignored on uefi systems
- removed unused dependency on rhel-system-roles
- removed libssh2 from the sap_hana_preconfigure_packages
- added further checks to avoid failures when certain CPU settings are not supported
- converted all true and false to lowercase
- updated minimum package handling
- host name and domain name set correctly
- many minor fixes

The rhel-system-roles-sap package is available as a Technology Preview. (BZ#1766622)

6.5.9. Virtualization

Select Intel network adapters now support SR-IOV in RHEL guests on Hyper-V

As a Technology Preview, Red Hat Enterprise Linux guest operating systems running on a Hyper-V hypervisor can now use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters supported by the ixgbevf and iavf drivers. This feature is enabled when the following conditions are met:

- SR-IOV support is enabled for the network interface controller (NIC)
- SR-IOV support is enabled for the virtual NIC
- SR-IOV support is enabled for the virtual switch
- The virtual function (VF) from the NIC is attached to the virtual machine.

The feature is currently supported with Microsoft Windows Server 2019 and 2016. (BZ#1348508)

KVM virtualization is usable in RHEL 8 Hyper-V virtual machines

As a Technology Preview, nested KVM virtualization can now be used on the Microsoft Hyper-V hypervisor. As a result, you can create virtual machines on a RHEL 8 guest system running on a Hyper-V host. Note that currently, this feature only works on Intel systems. In addition, nested virtualization is in some cases not enabled by default on Hyper-V. To enable it, see the following Microsoft documentation: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/nested-virtualization (BZ#1519039)

AMD SEV for KVM virtual machines

As a Technology Preview, RHEL 8 introduces the Secure Encrypted Virtualization (SEV) feature for AMD EPYC host machines that use the KVM hypervisor. If enabled on a virtual machine (VM), SEV encrypts VM memory so that the host cannot access data on the VM. This increases the security of the VM if the host is successfully infected by malware. Note that the number of VMs that can use this feature at a time on a single host is determined by the host hardware. Current AMD EPYC processors support up to 15 running VMs using SEV. Also note that for VMs with SEV configured to be able to boot, you must also configure the VM with a hard memory limit. To do so, add the following to the VM's XML configuration: The recommended value for N is equal to or greater than the guest RAM + 256 MiB. For example, if the guest is assigned 2 GiB RAM, N should be 2359296 or greater. (BZ#1501618, BZ#1501607, JIRA:RHELPLAN-7677)

Intel vGPU

As a Technology Preview, it is now possible to divide a physical Intel GPU device into multiple virtual devices referred to as mediated devices . These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs share the performance of a single physical Intel GPU. Note that only selected Intel GPUs are compatible with the vGPU feature.
In addition, assigning a physical GPU to VMs makes it impossible for the host to use the GPU, and may prevent graphical display output on the host from working. (BZ#1528684)

Nested virtualization now available on IBM POWER 9

As a Technology Preview, it is now possible to use the nested virtualization features on RHEL 8 host machines running on IBM POWER 9 systems. Nested virtualization enables KVM virtual machines (VMs) to act as hypervisors, which allows for running VMs inside VMs. Note that nested virtualization also remains a Technology Preview on AMD64 and Intel 64 systems. Also note that for nested virtualization to work on IBM POWER 9, the host, the guest, and the nested guests currently all need to run one of the following operating systems:

- RHEL 8
- RHEL 7 for POWER 9

(BZ#1505999, BZ#1518937)

Creating nested virtual machines

As a Technology Preview, nested virtualization is available for KVM virtual machines (VMs) in RHEL 8. With this feature, a VM that runs on a physical host can act as a hypervisor, and host its own VMs. Note that nested virtualization is only available on AMD64 and Intel 64 architectures, and the nested host must be a RHEL 7 or RHEL 8 VM. (JIRA:RHELPLAN-14047)

6.5.10. Containers

The podman-machine command is unsupported

The podman-machine command for managing virtual machines is available only as a Technology Preview. Instead, run Podman directly from the command line. (JIRA:RHELDOCS-16861)

6.6. Deprecated functionality

This part provides an overview of functionality that has been deprecated in Red Hat Enterprise Linux 8.1. Deprecated devices are fully supported, which means that they are tested and maintained, and their support status remains unchanged within Red Hat Enterprise Linux 8. However, these devices will likely not be supported in the next major version release, and are not recommended for new deployments on the current or future major versions of RHEL. For the most recent list of deprecated functionality within a particular major release, see the latest version of release documentation. For information about the length of support, see Red Hat Enterprise Linux Life Cycle and Red Hat Enterprise Linux Application Streams Life Cycle . A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from the product. Product documentation then identifies more recent packages that offer similar, identical, or more advanced functionality to the deprecated one, and provides further recommendations. For information regarding functionality that is present in RHEL 7 but has been removed in RHEL 8, see Considerations in adopting RHEL 8 . For information regarding functionality that is present in RHEL 8 but has been removed in RHEL 9, see Considerations in adopting RHEL 9 .

6.6.1. Installer and image creation

Several Kickstart commands and options have been deprecated

Using the following commands and options in RHEL 8 Kickstart files will print a warning in the logs:

- auth or authconfig
- device
- deviceprobe
- dmraid
- install
- lilo
- lilocheck
- mouse
- multipath
- bootloader --upgrade
- ignoredisk --interactive
- partition --active
- reboot --kexec

Where only specific options are listed, the base command and its other options are still available and not deprecated. For more details and related changes in Kickstart, see the Kickstart changes section of the Considerations in adopting RHEL 8 document.
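For example, a Kickstart file that still uses the deprecated auth command could be migrated to the authselect command; the profile chosen below is only an assumption for illustration:

# Deprecated:
auth --enableshadow --passalgo=sha512
# Replacement, passing options to the authselect tool:
authselect select sssd with-mkhomedir --force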
(BZ#1642765) The --interactive option of the ignoredisk Kickstart command has been deprecated Using the --interactive option in future releases of Red Hat Enterprise Linux will result in a fatal installation error. It is recommended that you modify your Kickstart file to remove the option. (BZ#1637872) 6.6.2. Software management The rpmbuild --sign command has been deprecated With this update, the rpmbuild --sign command has become deprecated. Using this command in future releases of Red Hat Enterprise Linux can result in an error. It is recommended that you use the rpmsign command instead. ( BZ#1688849 ) 6.6.3. Security TLS 1.0 and TLS 1.1 are deprecated The TLS 1.0 and TLS 1.1 protocols are disabled in the DEFAULT system-wide cryptographic policy level. If your scenario, for example, a video conferencing application in the Firefox web browser, requires using the deprecated protocols, switch the system-wide cryptographic policy to the LEGACY level: For more information, see the Strong crypto defaults in RHEL 8 and deprecation of weak crypto algorithms Knowledgebase article on the Red Hat Customer Portal and the update-crypto-policies(8) man page. ( BZ#1660839 ) DSA is deprecated in RHEL 8 The Digital Signature Algorithm (DSA) is considered deprecated in Red Hat Enterprise Linux 8. Authentication mechanisms that depend on DSA keys do not work in the default configuration. Note that OpenSSH clients do not accept DSA host keys even in the LEGACY system-wide cryptographic policy level. (BZ#1646541) SSL2 Client Hello has been deprecated in NSS The Transport Layer Security ( TLS ) protocol version 1.2 and earlier allow to start a negotiation with a Client Hello message formatted in a way that is backward compatible with the Secure Sockets Layer ( SSL ) protocol version 2. Support for this feature in the Network Security Services ( NSS ) library has been deprecated and it is disabled by default. Applications that require support for this feature need to use the new SSL_ENABLE_V2_COMPATIBLE_HELLO API to enable it. Support for this feature may be removed completely in future releases of Red Hat Enterprise Linux 8. (BZ#1645153) TPM 1.2 is deprecated The Trusted Platform Module (TPM) secure cryptoprocessor standard version was updated to version 2.0 in 2016. TPM 2.0 provides many improvements over TPM 1.2, and it is not backward compatible with the version. TPM 1.2 is deprecated in RHEL 8, and it might be removed in the major release. (BZ#1657927) 6.6.4. Networking Network scripts are deprecated in RHEL 8 Network scripts are deprecated in Red Hat Enterprise Linux 8 and they are no longer provided by default. The basic installation provides a new version of the ifup and ifdown scripts which call the NetworkManager service through the nmcli tool. In Red Hat Enterprise Linux 8, to run the ifup and the ifdown scripts, NetworkManager must be running. Note that custom commands in /sbin/ifup-local , ifdown-pre-local and ifdown-local scripts are not executed. If any of these scripts are required, the installation of the deprecated network scripts in the system is still possible with the following command: The ifup and ifdown scripts link to the installed legacy network scripts. Calling the legacy network scripts shows a warning about their deprecation. (BZ#1647725) 6.6.5. Kernel Diskless boot has been deprecated Diskless booting allows multiple systems to share a root filesystem via the network. While convenient, it is prone to introducing network latency in realtime workloads. 
With a future minor update of RHEL for Real Time 8, the diskless booting will no longer be supported. ( BZ#1748980 ) The rdma_rxe Soft-RoCE driver is deprecated Software Remote Direct Memory Access over Converged Ethernet (Soft-RoCE), also known as RXE, is a feature that emulates Remote Direct Memory Access (RDMA). In RHEL 8, the Soft-RoCE feature is available as an unsupported Technology Preview. However, due to stability issues, this feature has been deprecated and will be removed in RHEL 9. (BZ#1878207) 6.6.6. Hardware enablement The qla3xxx driver is deprecated The qla3xxx driver has been deprecated in RHEL 8. The driver will likely not be supported in future major releases of this product, and thus it is not recommended for new deployments. (BZ#1658840) The dl2k , dnet , ethoc , and dlci drivers are deprecated The dl2k , dnet , ethoc , and dlci drivers have been deprecated in RHEL 8. The drivers will likely not be supported in future major releases of this product, and thus they are not recommended for new deployments. (BZ#1660627) 6.6.7. File systems and storage The elevator kernel command line parameter is deprecated The elevator kernel command line parameter was used in earlier RHEL releases to set the disk scheduler for all devices. In RHEL 8, the parameter is deprecated. The upstream Linux kernel has removed support for the elevator parameter, but it is still available in RHEL 8 for compatibility reasons. Note that the kernel selects a default disk scheduler based on the type of device. This is typically the optimal setting. If you require a different scheduler, Red Hat recommends that you use udev rules or the Tuned service to configure it. Match the selected devices and switch the scheduler only for those devices. For more information, see Setting the disk scheduler . (BZ#1665295) NFSv3 over UDP has been disabled The NFS server no longer opens or listens on a User Datagram Protocol (UDP) socket by default. This change affects only NFS version 3 because version 4 requires the Transmission Control Protocol (TCP). NFS over UDP is no longer supported in RHEL 8. (BZ#1592011) 6.6.8. Desktop The libgnome-keyring library has been deprecated The libgnome-keyring library has been deprecated in favor of the libsecret library, as libgnome-keyring is not maintained upstream, and does not follow the necessary cryptographic policies for RHEL. The new libsecret library is the replacement that follows the necessary security standards. (BZ#1607766) 6.6.9. Graphics infrastructures AGP graphics cards are no longer supported Graphics cards using the Accelerated Graphics Port (AGP) bus are not supported in Red Hat Enterprise Linux 8. Use the graphics cards with PCI-Express bus as the recommended replacement. (BZ#1569610) 6.6.10. The web console The web console no longer supports incomplete translations The RHEL web console no longer provides translations for languages that have translations available for less than 50 % of the Console's translatable strings. If the browser requests translation to such a language, the user interface will be in English instead. ( BZ#1666722 ) 6.6.11. Virtualization virt-manager has been deprecated The Virtual Machine Manager application, also known as virt-manager , has been deprecated. The RHEL 8 web console, also known as Cockpit , is intended to become its replacement in a subsequent release. It is, therefore, recommended that you use the web console for managing virtualization in a GUI. 
Note, however, that some features available in virt-manager might not yet be available in the RHEL 8 web console. (JIRA:RHELPLAN-10304)

Virtual machine snapshots are not properly supported in RHEL 8

The current mechanism of creating virtual machine (VM) snapshots has been deprecated, as it is not working reliably. As a consequence, it is recommended not to use VM snapshots in RHEL 8. Note that a new VM snapshot mechanism is under development and will be fully implemented in a future minor release of RHEL 8. ( BZ#1686057 )

The Cirrus VGA virtual GPU type has been deprecated

With a future major update of Red Hat Enterprise Linux, the Cirrus VGA GPU device will no longer be supported in KVM virtual machines. Therefore, Red Hat recommends using the stdvga , virtio-vga , or qxl devices instead of Cirrus VGA. (BZ#1651994)

6.6.12. Deprecated packages

The following packages have been deprecated and will probably not be included in a future major release of Red Hat Enterprise Linux:

- 389-ds-base-legacy-tools
- authd
- custodia
- hostname
- libidn
- net-tools
- network-scripts
- nss-pam-ldapd
- sendmail
- yp-tools
- ypbind
- ypserv

6.7. Known issues

This part describes known issues in Red Hat Enterprise Linux 8.

6.7.1. Installer and image creation

The auth and authconfig Kickstart commands require the AppStream repository

The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig is used. However, by design, the authselect-compat package is only available in the AppStream repository. To work around this problem, verify that the BaseOS and AppStream repositories are available to the installer or use the authselect Kickstart command during installation. (BZ#1640697)

The reboot --kexec and inst.kexec commands do not provide a predictable system state

Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameters does not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results. Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux. (BZ#1697896)

Anaconda does not warn about insufficient minimal resources for installation

Anaconda initiates the installation even on systems that do not have the minimal resources required, and it does not display a warning message about the resources needed to perform the installation successfully. As a result, the installation can fail and the output errors do not provide clear messages for possible debug and recovery. To work around this problem, make sure that the system has the minimal resources required for installation: 2 GB of memory on PPC64(LE) and 1 GB on x86_64. As a result, it should be possible to perform a successful installation. (BZ#1696609)

Installation fails when using the reboot --kexec command

The RHEL 8 installation fails when using a Kickstart file that contains the reboot --kexec command. To avoid the problem, use the reboot command instead of reboot --kexec in your Kickstart file. ( BZ#1672405 )

Secure boot support for s390x in the installer

RHEL 8.1 provides support for preparing boot disks for use in IBM Z environments that enforce the use of secure boot. The capabilities of the server and Hypervisor used during installation determine if the resulting on-disk format contains secure boot support or not.
There is no way to influence the on-disk format during installation. Consequently, if you install RHEL 8.1 in an environment that supports secure boot, the system is unable to boot when moved to an environment lacking secure boot support, as is done in some fail-over scenarios.

To work around this problem, you need to configure the zipl tool that controls the on-disk boot format. zipl can be configured to write the on-disk format even if the environment in which it is run supports secure boot. Perform the following manual steps as the root user once the installation of RHEL 8.1 is completed:

1. Edit the configuration file /etc/zipl.conf.
2. Add a line containing "secure=0" to the section labelled "defaultboot".
3. Run the zipl tool without parameters.

After performing these steps, the on-disk format of the RHEL 8.1 boot disk will no longer contain secure boot support. As a result, the installation can be booted in environments that lack secure boot support. (BZ#1659400)

RHEL 8 initial setup cannot be performed via SSH

Currently, the RHEL 8 initial setup interface does not display when logged in to the system using SSH. As a consequence, it is impossible to perform the initial setup on a RHEL 8 machine managed via SSH. To work around this problem, perform the initial setup in the main system console (ttyS0) and, afterwards, log in using SSH. (BZ#1676439)

The default value for the secure= boot option is not set to auto

Currently, the default value for the secure= boot option is not set to auto. As a consequence, the secure boot feature is not available because the current default is disabled. To work around this problem, manually set secure=auto in the [defaultboot] section of the /etc/zipl.conf file. As a result, the secure boot feature is made available. For more information, see the zipl.conf man page. (BZ#1750326)

Copying the content of the Binary DVD.iso file to a partition omits the .treeinfo and .discinfo files

During local installation, while copying the content of the RHEL 8 Binary DVD.iso image file to a partition, the * in the cp <path>/\* <mounted partition>/dir command fails to copy the .treeinfo and .discinfo files. These files are required for a successful installation. As a result, the BaseOS and AppStream repositories are not loaded, and a debug-related log message in the anaconda.log file is the only record of the problem. To work around the problem, copy the missing .treeinfo and .discinfo files to the partition. (BZ#1687747)

Self-signed HTTPS server cannot be used in Kickstart installation

Currently, the installer fails to install from a self-signed https server when the installation source is specified in the kickstart file and the --noverifyssl option is used: To work around this problem, append the inst.noverifyssl parameter to the kernel command line when starting the kickstart installation. For example: (BZ#1745064)

6.7.2. Software management

yum repolist ends on first unavailable repository with skip_if_unavailable=false

The repository configuration option skip_if_unavailable is by default set as follows: This setting forces the yum repolist command to end on the first unavailable repository with an error and exit status 1. Consequently, yum repolist does not continue listing available repositories. Note that it is possible to override this setting in each repository's *.repo file. However, if you want to keep the default settings, you can work around the problem by using yum repolist with the following option: (BZ#1697472)

6.7.3.
Subscription management syspurpose addons have no effect on the subscription-manager attach --auto output. In Red Hat Enterprise Linux 8, four attributes of the syspurpose command-line tool have been added: role , usage , service_level_agreement and addons . Currently, only role , usage and service_level_agreement affect the output of running the subscription-manager attach --auto command. Users who attempt to set values to the addons argument will not observe any effect on the subscriptions that are auto-attached. (BZ#1687900) 6.7.4. Shells and command-line tools Applications using Wayland protocol cannot be forwarded to remote display servers In Red Hat Enterprise Linux 8.1, most applications use the Wayland protocol by default instead of the X11 protocol. As a consequence, the ssh server cannot forward the applications that use the Wayland protocol but is able to forward the applications that use the X11 protocol to a remote display server. To work around this problem, set the environment variable GDK_BACKEND=x11 before starting the applications. As a result, the application can be forwarded to remote display servers. ( BZ#1686892 ) systemd-resolved.service fails to start on boot The systemd-resolved service occasionally fails to start on boot. If this happens, restart the service manually after the boot finishes by using the following command: However, the failure of systemd-resolved on boot does not impact any other services. (BZ#1640802) 6.7.5. Infrastructure services Support for DNSSEC in dnsmasq The dnsmasq package introduces Domain Name System Security Extensions (DNSSEC) support for verifying hostname information received from root servers. Note that DNSSEC validation in dnsmasq is not compliant with FIPS 140-2. Do not enable DNSSEC in dnsmasq on Federal Information Processing Standard (FIPS) systems, and use the compliant validating resolver as a forwarder on the localhost. (BZ#1549507) 6.7.6. Security redhat-support-tool does not work with the FUTURE crypto policy Because a cryptographic key used by a certificate on the Customer Portal API does not meet the requirements by the FUTURE system-wide cryptographic policy, the redhat-support-tool utility does not work with this policy level at the moment. To work around this problem, use the DEFAULT crypto policy while connecting to the Customer Portal API. ( BZ#1802026 ) SELINUX=disabled in /etc/selinux/config does not work properly Disabling SELinux using the SELINUX=disabled option in the /etc/selinux/config results in a process in which the kernel boots with SELinux enabled and switches to disabled mode later in the boot process. This might cause memory leaks and race conditions and consequently also kernel panics. To work around this problem, disable SELinux by adding the selinux=0 parameter to the kernel command line as described in the Changing SELinux modes at boot time section of the Using SELinux title if your scenario really requires to completely disable SELinux. (JIRA:RHELPLAN-34199) libselinux-python is available only through its module The libselinux-python package contains only Python 2 bindings for developing SELinux applications and it is used for backward compatibility. For this reason, libselinux-python is no longer available in the default RHEL 8 repositories through the dnf install libselinux-python command. 
To work around this problem, enable both the libselinux-python and python27 modules, and install the libselinux-python package and its dependencies with the following commands: Alternatively, install libselinux-python using its install profile with a single command: As a result, you can install libselinux-python using the respective module. (BZ#1666328) udica processes UBI 8 containers only when started with --env container=podman The Red Hat Universal Base Image 8 (UBI 8) containers set the container environment variable to the oci value instead of the podman value. This prevents the udica tool from analyzing a container JavaScript Object Notation (JSON) file. To work around this problem, start a UBI 8 container using a podman command with the --env container=podman parameter. As a result, udica can generate an SELinux policy for a UBI 8 container only when you use the described workaround. ( BZ#1763210 ) Removing the rpm-plugin-selinux package leads to removing all selinux-policy packages from the system Removing the rpm-plugin-selinux package disables SELinux on the machine. It also removes all selinux-policy packages from the system. Repeated installation of the rpm-plugin-selinux package then installs the selinux-policy-minimum SELinux policy, even if the selinux-policy-targeted policy was previously present on the system. However, the repeated installation does not update the SELinux configuration file to account for the change in policy. As a consequence, SELinux is disabled even upon reinstallation of the rpm-plugin-selinux package. To work around this problem: Enter the umount /sys/fs/selinux/ command. Manually install the missing selinux-policy-targeted package. Edit the /etc/selinux/config file so that the policy is equal to SELINUX=enforcing . Enter the command load_policy -i . As a result, SELinux is enabled and running the same policy as before. (BZ#1641631) SELinux prevents systemd-journal-gatewayd to call newfstatat() on shared memory files created by corosync SELinux policy does not contain a rule that allows the systemd-journal-gatewayd daemon to access files created by the corosync service. As a consequence, SELinux denies systemd-journal-gatewayd to call the newfstatat() function on shared memory files created by corosync . To work around this problem, create a local policy module with an allow rule which enables the described scenario. See the audit2allow(1) man page for more information on generating SELinux policy allow and dontaudit rules. As a result of the workaround, systemd-journal-gatewayd can call the function on shared memory files created by corosync with SELinux in enforcing mode. (BZ#1746398) Negative effects of the default logging setup on performance The default logging environment setup might consume 4 GB of memory or even more and adjustments of rate-limit values are complex when systemd-journald is running with rsyslog . See the Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article for more information. (JIRA:RHELPLAN-10431) Parameter not known errors in the rsyslog output with config.enabled In the rsyslog output, an unexpected bug occurs in configuration processing errors using the config.enabled directive. As a consequence, parameter not known errors are displayed while using the config.enabled directive except for the include() statements. To work around this problem, set config.enabled=on or use include() statements. 
(BZ#1659383) Certain rsyslog priority strings do not work correctly Support for the GnuTLS priority string for imtcp that allows fine-grained control over encryption is not complete. Consequently, the following priority strings do not work properly in rsyslog : To work around this problem, use only correctly working priority strings: As a result, current configurations must be limited to the strings that work correctly. ( BZ#1679512 ) Connections to servers with SHA-1 signatures do not work with GnuTLS SHA-1 signatures in certificates are rejected by the GnuTLS secure communications library as insecure. Consequently, applications that use GnuTLS as a TLS backend cannot establish a TLS connection to peers that offer such certificates. This behavior is inconsistent with other system cryptographic libraries. To work around this problem, upgrade the server to use certificates signed with SHA-256 or stronger hash, or switch to the LEGACY policy. (BZ#1628553) TLS 1.3 does not work in NSS in FIPS mode TLS 1.3 is not supported on systems working in FIPS mode. As a result, connections that require TLS 1.3 for interoperability do not function on a system working in FIPS mode. To enable the connections, disable the system's FIPS mode or enable support for TLS 1.2 in the peer. ( BZ#1724250 ) OpenSSL incorrectly handles PKCS #11 tokens that does not support raw RSA or RSA-PSS signatures The OpenSSL library does not detect key-related capabilities of PKCS #11 tokens. Consequently, establishing a TLS connection fails when a signature is created with a token that does not support raw RSA or RSA-PSS signatures. To work around the problem, add the following lines after the .include line at the end of the crypto_policy section in the /etc/pki/tls/openssl.cnf file: As a result, a TLS connection can be established in the described scenario. ( BZ#1685470 ) The OpenSSL TLS library does not detect if the PKCS#11 token supports creation of raw RSA or RSA-PSS signatures The TLS-1.3 protocol requires the support for RSA-PSS signature. If the PKCS#11 token does not support raw RSA or RSA-PSS signatures, the server applications which use OpenSSL TLS library will fail to work with the RSA key if it is held by the PKCS#11 token. As a result, TLS communication will fail. To work around this problem, configure server or client to use the TLS-1.2 version as the highest TLS protocol version available. ( BZ#1681178 ) OpenSSL generates a malformed status_request extension in the CertificateRequest message in TLS 1.3 OpenSSL servers send a malformed status_request extension in the CertificateRequest message if support for the status_request extension and client certificate-based authentication are enabled. In such case, OpenSSL does not interoperate with implementations compliant with the RFC 8446 protocol. As a result, clients that properly verify extensions in the 'CertificateRequest' message abort connections with the OpenSSL server. To work around this problem, disable support for the TLS 1.3 protocol on either side of the connection or disable support for status_request on the OpenSSL server. This will prevent the server from sending malformed messages. ( BZ#1749068 ) ssh-keyscan cannot retrieve RSA keys of servers in FIPS mode The SHA-1 algorithm is disabled for RSA signatures in FIPS mode, which prevents the ssh-keyscan utility from retrieving RSA keys of servers operating in that mode. 
To work around this problem, use ECDSA keys instead, or retrieve the keys locally from the /etc/ssh/ssh_host_rsa_key.pub file on the server. ( BZ#1744108 ) scap-security-guide PCI-DSS remediation of Audit rules does not work properly The scap-security-guide package contains a combination of remediation and a check that can result in one of the following scenarios: incorrect remediation of Audit rules scan evaluation containing false positives where passed rules are marked as failed Consequently, during the RHEL 8.1 installation process, scanning of the installed system reports some Audit rules as either failed or errored. To work around this problem, follow the instructions in the RHEL-8.1 workaround for remediating and scanning with the scap-security-guide PCI-DSS profile Knowledgebase article. ( BZ#1754919 ) Certain sets of interdependent rules in SSG can fail Remediation of SCAP Security Guide (SSG) rules in a benchmark can fail due to undefined ordering of rules and their dependencies. If two or more rules need to be executed in a particular order, for example, when one rule installs a component and another rule configures the same component, they can run in the wrong order and remediation reports an error. To work around this problem, run the remediation twice, and the second run fixes the dependent rules. ( BZ#1750755 ) A utility for security and compliance scanning of containers is not available In Red Hat Enterprise Linux 7, the oscap-docker utility can be used for scanning of Docker containers based on Atomic technologies. In Red Hat Enterprise Linux 8, the Docker- and Atomic-related OpenSCAP commands are not available. To work around this problem, see the Using OpenSCAP for scanning containers in RHEL 8 article on the Customer Portal. As a result, you can use only an unsupported and limited way for security and compliance scanning of containers in RHEL 8 at the moment. (BZ#1642373) OpenSCAP does not provide offline scanning of virtual machines and containers Refactoring of OpenSCAP codebase caused certain RPM probes to fail to scan VM and containers file systems in offline mode. For that reason, the following tools were removed from the openscap-utils package: oscap-vm and oscap-chroot . Also, the openscap-containers package was completely removed. (BZ#1618489) OpenSCAP rpmverifypackage does not work correctly The chdir and chroot system calls are called twice by the rpmverifypackage probe. Consequently, an error occurs when the probe is utilized during an OpenSCAP scan with custom Open Vulnerability and Assessment Language (OVAL) content. To work around this problem, do not use the rpmverifypackage_test OVAL test in your content or use only the content from the scap-security-guide package where rpmverifypackage_test is not used. (BZ#1646197) SCAP Workbench fails to generate results-based remediations from tailored profiles The following error occurs when trying to generate results-based remediation roles from a customized profile using the SCAP Workbench tool: To work around this problem, use the oscap command with the --tailoring-file option. (BZ#1640715) OSCAP Anaconda Addon does not install all packages in text mode The OSCAP Anaconda Addon plugin cannot modify the list of packages selected for installation by the system installer if the installation is running in text mode. 
Consequently, when a security policy profile is specified using Kickstart and the installation is running in text mode, any additional packages required by the security policy are not installed during installation. To work around this problem, either run the installation in graphical mode or specify all packages that are required by the security policy profile in the security policy in the %packages section in your Kickstart file. As a result, packages that are required by the security policy profile are not installed during RHEL installation without one of the described workarounds, and the installed system is not compliant with the given security policy profile. ( BZ#1674001 ) OSCAP Anaconda Addon does not correctly handle customized profiles The OSCAP Anaconda Addon plugin does not properly handle security profiles with customizations in separate files. Consequently, the customized profile is not available in the RHEL graphical installation even when you properly specify it in the corresponding Kickstart section. To work around this problem, follow the instructions in the Creating a single SCAP data stream from an original DS and a tailoring file Knowledgebase article. As a result of this workaround, you can use a customized SCAP profile in the RHEL graphical installation. (BZ#1691305) 6.7.7. Networking The formatting of the verbose output of arptables now matches the format of the utility on RHEL 7 In RHEL 8, the iptables-arptables package provides an nftables -based replacement of the arptables utility. Previously, the verbose output of arptables separated counter values only with a comma, while arptables on RHEL 7 separated the described output with both a space and a comma. As a consequence, if you used scripts created on RHEL 7 that parsed the output of the arptables -v -L command, you had to adjust these scripts. This incompatibility has been fixed. As a result, arptables on RHEL 8.1 now also separates counter values with both a space and a comma. (BZ#1676968) nftables does not support multi-dimensional IP set types The nftables packet-filtering framework does not support set types with concatenations and intervals. Consequently, you cannot use multi-dimensional IP set types, such as hash:net,port , with nftables . To work around this problem, use the iptables framework with the ipset tool if you require multi-dimensional IP set types. (BZ#1593711) IPsec network traffic fails during IPsec offloading when GRO is disabled IPsec offloading is not expected to work when Generic Receive Offload (GRO) is disabled on the device. If IPsec offloading is configured on a network interface and GRO is disabled on that device, IPsec network traffic fails. To work around this problem, keep GRO enabled on the device. (BZ#1649647) 6.7.8. Kernel The i40iw module does not load automatically on boot Due to many i40e NICs not supporting iWarp and the i40iw module not fully supporting suspend/resume, this module is not automatically loaded by default to ensure suspend/resume works properly. To work around this problem, manually edit the /lib/udev/rules.d/90-rdma-hw-modules.rules file to enable automated load of i40iw . Also note that if there is another RDMA device installed with a i40e device on the same machine, the non-i40e RDMA device triggers the rdma service, which loads all enabled RDMA stack modules, including the i40iw module. 
(BZ#1623712) Network interface is renamed to kdump-<interface-name> when fadump is used When firmware-assisted dump ( fadump ) is utilized to capture a vmcore and store it to a remote machine using SSH or NFS protocol, the network interface is renamed to kdump-<interface-name> if <interface-name> is generic, for example, *eth#, or net#. This problem occurs because the vmcore capture scripts in the initial RAM disk ( initrd ) add the kdump- prefix to the network interface name to secure persistent naming. The same initrd is used also for a regular boot, so the interface name is changed for the production kernel too. (BZ#1745507) Systems with a large amount of persistent memory experience delays during the boot process Systems with a large amount of persistent memory take a long time to boot because the initialization of the memory is serialized. Consequently, if there are persistent memory file systems listed in the /etc/fstab file, the system might timeout while waiting for devices to become available. To work around this problem, configure the DefaultTimeoutStartSec option in the /etc/systemd/system.conf file to a sufficiently large value. (BZ#1666538) KSM sometimes ignores NUMA memory policies When the kernel shared memory (KSM) feature is enabled with the merge_across_nodes=1 parameter, KSM ignores memory policies set by the mbind() function, and may merge pages from some memory areas to Non-Uniform Memory Access (NUMA) nodes that do not match the policies. To work around this problem, disable KSM or set the merge_across_nodes parameter to 0 if using NUMA memory binding with QEMU. As a result, NUMA memory policies configured for the KVM VM will work as expected. (BZ#1153521) The system enters the emergency mode at boot-time when fadump is enabled The system enters the emergency mode when fadump ( kdump ) or dracut squash module is enabled in the initramfs scheme because systemd manager fails to fetch the mount information and configure the LV partition to mount. To work around this problem, add the following kernel command line parameter rd.lvm.lv=<VG>/<LV> to discover and mount the failed LV partition appropriately. As a result, the system will boot successfully in the described scenario. (BZ#1750278) Using irqpoll in the kdump kernel command line causes a vmcore generation failure Due to an existing underlying problem with the nvme driver on the 64-bit ARM architectures running on the Amazon Web Services (AWS) cloud platforms, the vmcore generation fails if the irqpoll kdump command line argument is provided to the first kernel. Consequently, no vmcore is dumped in the /var/crash/ directory after a kernel crash. To work around this problem: Add irqpoll to the KDUMP_COMMANDLINE_REMOVE key in the /etc/sysconfig/kdump file. Restart the kdump service by running the systemctl restart kdump command. As a result, the first kernel correctly boots and the vmcore is expected to be captured upon the kernel crash. (BZ#1654962) Debug kernel fails to boot in crash capture environment in RHEL 8 Due to memory-demanding nature of the debug kernel, a problem occurs when the debug kernel is in use and a kernel panic is triggered. As a consequence, the debug kernel is not able to boot as the capture kernel, and a stack trace is generated instead. To work around this problem, increase the crash kernel memory accordingly. As a result, the debug kernel successfully boots in the crash capture environment. 
(BZ#1659609) softirq changes can cause the localhost interface to drop UDP packets when under heavy load Changes in the Linux kernel's software interrupt ( softirq ) handling are done to reduce denial of service (DOS) effects. Consequently, this leads to situations where the localhost interface drops User Datagram Protocol (UDP) packets under heavy load. To work around this problem, increase the size of the network device backlog buffer to value 6000: In Red Hat tests, this value was sufficient to prevent packet loss. More heavily loaded systems might require larger backlog values. Increased backlogs have the effect of potentially increasing latency on the localhost interface. The result is to increase the buffer and allow more packets to be waiting for processing, which reduces the chances of dropping localhost packets. (BZ#1779337) 6.7.9. Hardware enablement The HP NMI watchdog in some cases does not generate a crash dump The hpwdt driver for the HP NMI watchdog is sometimes not able to claim a non-maskable interrupt (NMI) generated by the HPE watchdog timer because the NMI was instead consumed by the perfmon driver. As a consequence, hpwdt in some cases cannot call a panic to generate a crash dump. (BZ#1602962) Installing RHEL 8.1 on a test system configured with a QL41000 card results in a kernel panic While installing RHEL 8.1 on a test system configured with a QL41000 card, the system is unable to handle the kernel NULL pointer dereference at 000000000000003c card. As a consequence, it results in a kernel panic error. There is no work around available for this issue. (BZ#1743456) The cxgb4 driver causes crash in the kdump kernel The kdump kernel crashes while trying to save information in the vmcore file. Consequently, the cxgb4 driver prevents the kdump kernel from saving a core for later analysis. To work around this problem, add the "novmcoredd" parameter to the kdump kernel command line to allow saving core files. (BZ#1708456) 6.7.10. File systems and storage Certain SCSI drivers might sometimes use an excessive amount of memory Certain SCSI drivers use a larger amount of memory than in RHEL 7. In certain cases, such as vPort creation on a Fibre Channel host bus adapter (HBA), the memory usage might be excessive, depending upon the system configuration. The increased memory usage is caused by memory preallocation in the block layer. Both the multiqueue block device scheduling (BLK-MQ) and the multiqueue SCSI stack (SCSI-MQ) preallocate memory for each I/O request in RHEL 8, leading to the increased memory usage. (BZ#1698297) VDO cannot suspend until UDS has finished rebuilding When a Virtual Data Optimizer (VDO) volume starts after an unclean system shutdown, it rebuilds the Universal Deduplication Service (UDS) index. If you try to suspend the VDO volume using the dmsetup suspend command while the UDS index is rebuilding, the suspend command might become unresponsive. The command finishes only after the rebuild is done. The unresponsiveness is noticeable only with VDO volumes that have a large UDS index, which causes the rebuild to take a longer time. ( BZ#1737639 ) An NFS 4.0 patch can result in reduced performance under an open-heavy workload Previously, a bug was fixed that, in some cases, could cause an NFS open operation to overlook the fact that a file had been removed or renamed on the server. However, the fix may cause slower performance with workloads that require many open operations. 
To work around this problem, it might help to use NFS version 4.1 or higher, which have been improved to grant delegations to clients in more cases, allowing clients to perform open operations locally, quickly, and safely. (BZ#1748451) 6.7.11. Dynamic programming languages, web and database servers nginx cannot load server certificates from hardware security tokens The nginx web server supports loading TLS private keys from hardware security tokens directly from PKCS#11 modules. However, it is currently impossible to load server certificates from hardware security tokens through the PKCS#11 URI. To work around this problem, store server certificates on the file system ( BZ#1668717 ) php-fpm causes SELinux AVC denials when php-opcache is installed with PHP 7.2 When the php-opcache package is installed, the FastCGI Process Manager ( php-fpm ) causes SELinux AVC denials. To work around this problem, change the default configuration in the /etc/php.d/10-opcache.ini file to the following: Note that this problem affects only the php:7.2 stream, not the php:7.3 one. ( BZ#1670386 ) 6.7.12. Compilers and development tools The ltrace tool does not report function calls Because of improvements to binary hardening applied to all RHEL components, the ltrace tool can no longer detect function calls in binary files coming from RHEL components. As a consequence, ltrace output is empty because it does not report any detected calls when used on such binary files. There is no workaround currently available. As a note, ltrace can correctly report calls in custom binary files built without the respective hardening flags. (BZ#1618748) 6.7.13. Identity Management AD users with expired accounts can be allowed to log in when using GSSAPI authentication The accountExpires attribute that SSSD uses to see whether an account has expired is not replicated to the global catalog by default. As a result, users with expired accounts can log in when using GSSAPI authentication. To work around this problem, the global catalog support can be disabled by specifying ad_enable_gc=False in the sssd.conf file. With this setting, users with expired accounts will be denied access when using GSSAPI authentication. Note that SSSD connects to each LDAP server individually in this scenario, which can increase the connection count. (BZ#1081046) Using the cert-fix utility with the --agent-uid pkidbuser option breaks Certificate System Using the cert-fix utility with the --agent-uid pkidbuser option corrupts the LDAP configuration of Certificate System. As a consequence, Certificate System might become unstable and manual steps are required to recover the system. ( BZ#1729215 ) Changing /etc/nsswitch.conf requires a manual system reboot Any change to the /etc/nsswitch.conf file, for example running the authselect select profile_id command, requires a system reboot so that all relevant processes use the updated version of the /etc/nsswitch.conf file. If a system reboot is not possible, restart the service that joins your system to Active Directory, which is the System Security Services Daemon (SSSD) or winbind . ( BZ#1657295 ) No information about required DNS records displayed when enabling support for AD trust in IdM When enabling support for Active Directory (AD) trust in Red Hat Enterprise Linux Identity Management (IdM) installation with external DNS management, no information about required DNS records is displayed. Forest trust to AD is not successful until the required DNS records are added. 
To work around this problem, run the 'ipa dns-update-system-records --dry-run' command to obtain a list of all DNS records required by IdM. When external DNS for IdM domain defines the required DNS records, establishing forest trust to AD is possible. ( BZ#1665051 ) SSSD returns incorrect LDAP group membership for local users If the System Security Services Daemon (SSSD) serves users from the local files, the files provider does not include group memberships from other domains. As a consequence, if a local user is a member of an LDAP group, the id local_user command does not return the user's LDAP group membership. To work around the problem, either revert the order of the databases where the system is looking up the group membership of users in the /etc/nsswitch.conf file, replacing sss files with files sss , or disable the implicit files domain by adding to the [sssd] section in the /etc/sssd/sssd.conf file. As a result, id local_user returns correct LDAP group membership for local users. ( BZ#1652562 ) Default PAM settings for systemd-user have changed in RHEL 8 which may influence SSSD behavior The Pluggable authentication modules (PAM) stack has changed in Red Hat Enterprise Linux 8. For example, the systemd user session now starts a PAM conversation using the systemd-user PAM service. This service now recursively includes the system-auth PAM service, which may include the pam_sss.so interface. This means that the SSSD access control is always called. Be aware of the change when designing access control rules for RHEL 8 systems. For example, you can add the systemd-user service to the allowed services list. Please note that for some access control mechanisms, such as IPA HBAC or AD GPOs, the systemd-user service is has been added to the allowed services list by default and you do not need to take any action. ( BZ#1669407 ) SSSD does not correctly handle multiple certificate matching rules with the same priority If a given certificate matches multiple certificate matching rules with the same priority, the System Security Services Daemon (SSSD) uses only one of the rules. As a workaround, use a single certificate matching rule whose LDAP filter consists of the filters of the individual rules concatenated with the | (or) operator. For examples of certificate matching rules, see the sss-certamp(5) man page. (BZ#1447945) Private groups fail to be created with auto_private_group = hybrid when multiple domains are defined Private groups fail to be created with the option auto_private_group = hybrid when multiple domains are defined and the hybrid option is used by any domain other than the first one. If an implicit files domain is defined along with an AD or LDAP domain in the sssd.conf`file and is not marked as `MPG_HYBRID , then SSSD fails to create a private group for a user who has uid=gid and the group with this gid does not exist in AD or LDAP. The sssd_nss responder checks for the value of the auto_private_groups option in the first domain only. As a consequence, in setups where multiple domains are configured, which includes the default setup on RHEL 8, the option auto_private_group has no effect. To work around this problem, set enable_files_domain = false in the sssd section of of sssd.conf . As a result, If the enable_files_domain option is set to false, then sssd does not add a domain with id_provider=files at the start of the list of active domains, and therefore this bug does not occur. 
(BZ#1754871) python-ply is not FIPS compatible The YACC module of the python-ply package uses the MD5 hashing algorithm to generate the fingerprint of a YACC signature. However, FIPS mode blocks the use of MD5, which is only allowed in non-security contexts. As a consequence, python-ply is not FIPS compatible. On a system in FIPS mode, all calls to ply.yacc.yacc() fail with the error message: The problem affects python-pycparser and some use cases of python-cffi . To work around this problem, modify the line 2966 of the file /usr/lib/python3.6/site-packages/ply/yacc.py , replacing sig = md5() with sig = md5(usedforsecurity=False) . As a result, python-ply can be used in FIPS mode. ( BZ#1747490 ) SSSD retrieves incomplete list of members if the group size exceeds 1500 members During the integration of SSSD with Active Directory, SSSD retrieves incomplete group member lists when the group size exceeds 1500 members. This issue occurs because Active Directory's MaxValRange policy, which restricts the number of members retrievable in a single query, is set to 1500 by default. To work around this problem, change the MaxValRange setting in Active Directory to accommodate larger group sizes. (JIRA:RHELDOCS-19603) 6.7.14. Desktop Limitations of the Wayland session With Red Hat Enterprise Linux 8, the GNOME environment and the GNOME Display Manager (GDM) use Wayland as the default session type instead of the X11 session, which was used with the major version of RHEL. The following features are currently unavailable or do not work as expected under Wayland : Multi-GPU setups are not supported under Wayland . X11 configuration utilities, such as xrandr , do not work under Wayland due to its different approach to handling, resolutions, rotations, and layout. You can configure the display features using GNOME settings. Screen recording and remote desktop require applications to support the portal API on Wayland . Certain legacy applications do not support the portal API. Pointer accessibility is not available on Wayland . No clipboard manager is available. GNOME Shell on Wayland ignores keyboard grabs issued by most legacy X11 applications. You can enable an X11 application to issue keyboard grabs using the /org/gnome/mutter/wayland/xwayland-grab-access-rules GSettings key. By default, GNOME Shell on Wayland enables the following applications to issue keyboard grabs: GNOME Boxes Vinagre Xephyr virt-manager , virt-viewer , and remote-viewer vncviewer Wayland inside guest virtual machines (VMs) has stability and performance problems. RHEL automatically falls back to the X11 session when running in a VM. If you upgrade to RHEL 8 from a RHEL 7 system where you used the X11 GNOME session, your system continues to use X11 . The system also automatically falls back to X11 when the following graphics drivers are in use: The proprietary NVIDIA driver The cirrus driver The mga driver The aspeed driver You can disable the use of Wayland manually: To disable Wayland in GDM, set the WaylandEnable=false option in the /etc/gdm/custom.conf file. To disable Wayland in the GNOME session, select the legacy X11 option by using the cogwheel menu on the login screen after entering your login name. For more details on Wayland , see https://wayland.freedesktop.org/ . ( BZ#1797409 ) Drag-and-drop does not work between desktop and applications Due to a bug in the gnome-shell-extensions package, the drag-and-drop functionality does not currently work between desktop and applications. 
Support for this feature will be added back in a future release. ( BZ#1717947 ) Disabling flatpak repositories from Software Repositories is not possible Currently, it is not possible to disable or remove flatpak repositories in the Software Repositories tool in the GNOME Software utility. ( BZ#1668760 ) Generation 2 RHEL 8 virtual machines sometimes fail to boot on Hyper-V Server 2016 hosts When using RHEL 8 as the guest operating system on a virtual machine (VM) running on a Microsoft Hyper-V Server 2016 host, the VM in some cases fails to boot and returns to the GRUB boot menu. In addition, the following error is logged in the Hyper-V event log: This error occurs due to a UEFI firmware bug on the Hyper-V host. To work around this problem, use Hyper-V Server 2019 as the host. (BZ#1583445) GNOME Shell on Wayland performs slowly when using a software renderer When using a software renderer, GNOME Shell as a Wayland compositor ( GNOME Shell on Wayland ) does not use a cacheable framebuffer for rendering the screen. Consequently, GNOME Shell on Wayland is slow. To workaround the problem, go to the GNOME Display Manager (GDM) login screen and switch to a session that uses the X11 protocol instead. As a result, the Xorg display server, which uses cacheable memory, is used, and GNOME Shell on Xorg in the described situation performs faster compared to GNOME Shell on Wayland . (BZ#1737553) System crash may result in fadump configuration loss This issue is observed on systems where firmware-assisted dump (fadump) is enabled, and the boot partition is located on a journaling file system such as XFS. A system crash might cause the boot loader to load an older initrd that does not have the dump capturing support enabled. Consequently, after recovery, the system does not capture the vmcore file, which results in fadump configuration loss. To work around this problem: If /boot is a separate partition, perform the following: Restart the kdump service Run the following commands as the root user, or using a user account with CAP_SYS_ADMIN rights: If /boot is not a separate partition, reboot the system. (BZ#1723501) Potential risk when using the default value for ldap_id_use_start_tls option When using ldap:// without TLS for identity lookups, it can pose a risk for an attack vector. Particularly a man-in-the-middle (MITM) attack which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL. (JIRA:RHELPLAN-155168) 6.7.15. Graphics infrastructures radeon fails to reset hardware correctly The radeon kernel driver currently does not reset hardware in the kexec context correctly. Instead, radeon falls over, which causes the rest of the kdump service to fail. To work around this problem, blacklist radeon in kdump by adding the following line to the /etc/kdump.conf file: Restart the machine and kdump . 
After starting kdump , the force_rebuild 1 line may be removed from the configuration file. Note that in this scenario, no graphics will be available during kdump , but kdump will work successfully. (BZ#1694705) 6.7.16. The web console Unprivileged users can access the Subscriptions page If a non-administrator navigates to the Subscriptions page of the web console, the web console displays a generic error message "Cockpit had an unexpected internal error". To work around this problem, sign in to the web console with a privileged user and make sure to check the Reuse my password for privileged tasks checkbox. ( BZ#1674337 ) 6.7.17. Virtualization Using cloud-init to provision virtual machines on Microsoft Azure fails Currently, it is not possible to use the cloud-init utility to provision a RHEL 8 virtual machine (VM) on the Microsoft Azure platform. To work around this problem, use one of the following methods: Use the WALinuxAgent package instead of cloud-init to provision VMs on Microsoft Azure. Add the following setting to the [main] section in the /etc/NetworkManager/NetworkManager.conf file: (BZ#1641190) RHEL 8 virtual machines on RHEL 7 hosts in some cases cannot be viewed in higher resolution than 1920x1200 Currently, when using a RHEL 8 virtual machine (VM) running on a RHEL 7 host system, certain methods of displaying the the graphical output of the VM, such as running the application in kiosk mode, cannot use greater resolution than 1920x1200. As a consequence, displaying VMs using those methods only works in resolutions up to 1920x1200, even if the host hardware supports higher resolutions. (BZ#1635295) Low GUI display performance in RHEL 8 virtual machines on a Windows Server 2019 host When using RHEL 8 as a guest operating system in graphical mode on a Windows Server 2019 host, the GUI display performance is low, and connecting to a console output of the guest currently takes significantly longer than expected. This is a known issue on Windows 2019 hosts and is pending a fix by Microsoft. To work around this problem, connect to the guest using SSH or use Windows Server 2016 as the host. (BZ#1706541) Installing RHEL virtual machines sometimes fails Under certain circumstances, RHEL 7 and RHEL 8 virtual machines created using the virt-install utility fail to boot if the --location option is used. To work around this problem, use the --extra-args option instead and specify an installation tree reachable by the network, for example: This ensures that the RHEL installer finds the installation files correctly. (BZ#1677019) Displaying multiple monitors of virtual machines that use Wayland is not possible with QXL Using the remote-viewer utility to display more than one monitor of a virtual machine (VM) that is using the Wayland display server causes the VM to become unresponsive and the Waiting for display status message to be displayed indefinitely. To work around this problem, use virtio-gpu instead of qxl as the GPU device for VMs that use Wayland. (BZ#1642887) virsh iface-\* commands do not work consistently Currently, virsh iface-* commands, such as virsh iface-start and virsh iface-destroy , frequently fail due to configuration dependencies. Therefore, it is recommended not to use virsh iface-\* commands for configuring and managing host network connections. Instead, use the NetworkManager program and its related management applications. 
(BZ#1664592) Customizing an ESXi VM using cloud-init and rebooting the VM causes IP setting loss and makes booting the VM very slow Currently, if the cloud-init service is used to modify a virtual machine (VM) that runs on the VMware ESXi hypervisor to use static IP and the VM is then cloned, the new cloned VM in some cases takes a very long time to reboot. This is caused cloud-init rewriting the VM's static IP to DHCP and then searching for an available datasource. To work around this problem, you can uninstall cloud-init after the VM is booted for the first time. As a result, the subsequent reboots will not be slowed down. (BZ#1666961, BZ#1706482 ) RHEL 8 virtual machines sometimes cannot boot on Witherspoon hosts RHEL 8 virtual machines (VMs) that use the pseries-rhel7.6.0-sxxm machine type in some cases fail to boot on Power9 S922LC for HPC hosts (also known as Witherspoon) that use the DD2.2 or DD2.3 CPU. Attempting to boot such a VM instead generates the following error message: To work around this problem, configure the virtual machine's XML configuration as follows: ( BZ#1732726 , BZ#1751054 ) IBM POWER virtual machines do not work correctly with zero memory NUMA nodes Currently, when an IBM POWER virtual machine (VM) running on a RHEL 8 host is configured with a NUMA node that uses zero memory ( memory='0' ), the VM cannot boot. Therefore, Red Hat strongly recommends not using IBM POWER VMs with zero-memory NUMA nodes on RHEL 8. (BZ#1651474) Migrating a POWER9 guest from a RHEL 7-ALT host to RHEL 8 fails Currently, migrating a POWER9 virtual machine from a RHEL 7-ALT host system to RHEL 8 becomes unresponsive with a "Migration status: active" status. To work around this problem, disable Transparent Huge Pages (THP) on the RHEL 7-ALT host, which enables the migration to complete successfully. (BZ#1741436) SMT CPU topology is not detected by VMs when using host passthrough mode on AMD EPYC When a virtual machine (VM) boots with the CPU host passthrough mode on an AMD EPYC host, the TOPOEXT CPU feature flag is not present. Consequently, the VM is not able to detect a virtual CPU topology with multiple threads per core. To work around this problem, boot the VM with the EPYC CPU model instead of host passthrough. ( BZ#1740002 ) Virtual machines sometimes fail to start when using many virtio-blk disks Adding a large number of virtio-blk devices to a virtual machine (VM) may exhaust the number of interrupt vectors available in the platform. If this occurs, the VM's guest OS fails to boot, and displays a dracut-initqueue[392]: Warning: Could not boot error. ( BZ#1719687 )
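As a hedged illustration of the NetworkManager-based alternative to the virsh iface-* commands mentioned above, the following sketch creates a bridge for virtual machine networking with nmcli. The connection names br0 and br0-port1 and the enslaved interface eth0 are placeholders chosen for this example, not values taken from the release notes.
# Create a bridge connection and enslave a physical interface to it (example names)
nmcli connection add type bridge con-name br0 ifname br0
nmcli connection add type bridge-slave con-name br0-port1 ifname eth0 master br0
# Activate the bridge so that virtual machines can attach to it
nmcli connection up br0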
[ "subscription-manager list --available", "subscription-manager list --consumed", "/usr/share/doc/vdo/examples/ansible/vdo.py", "/usr/lib/python3.6/site-packages/ansible/modules/system/vdo.py", "yum module install php:7.3", "yum module install ruby:2.6", "yum module install nodejs:12", "yum module enable mariadb-devel:10.3 yum install Judy-devel", "yum module install nginx:1.16", "yum install gcc-toolset-9", "scl enable gcc-toolset-9 tool", "scl enable gcc-toolset-9 bash", "virt-xml testguest --start --no-define --edit --boot network", "\"Failed to add rule for system call ...\"", "DIMM location: not present. DMI handle: 0x<ADDRESS>", "'checkpointing a container requires at least CRIU 31100'", "smartpqi 0000:23:00.0: failed to allocate PQI error buffer", "xfs_info /mount-point | grep ftype", "<memtune> <hard_limit unit='KiB'>N</hard_limit> </memtune>", "update-crypto-policies --set LEGACY", "~]# yum install network-scripts", "Example contents of the `zipl.conf` file after the change:", "[defaultboot] defaultauto prompt=1 timeout=5 target=/boot secure=0", "url --url=https://SERVER/PATH --noverifyssl", "inst.ks=<URL> inst.noverifyssl", "skip_if_unavailable=false", "--setopt=*.skip_if_unavailable=True", "systemctl start systemd-resolved", "dnf module enable libselinux-python dnf install libselinux-python", "dnf module install libselinux-python:2.8/common", "NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+DHE-RSA:+AES-256-GCM:+SIGN-RSA-SHA384:+COMP-ALL:+GROUP-ALL", "NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+ECDHE-RSA:+AES-128-CBC:+SIGN-RSA-SHA1:+COMP-ALL:+GROUP-ALL", "SignatureAlgorithms = RSA+SHA256:RSA+SHA512:RSA+SHA384:ECDSA+SHA256:ECDSA+SHA512:ECDSA+SHA384 MaxProtocol = TLSv1.2", "Error generating remediation role .../remediation.sh: Exit code of oscap was 1: [output truncated]", "echo 6000 > /proc/sys/net/core/netdev_max_backlog", "opcache.huge_code_pages=0", "enable_files_domain=False", "\"UnboundLocalError: local variable 'sig' referenced before assignment\"", "The guest operating system reported that it failed with the following error code: 0x1E", "fsfreeze -f fsfreeze -u", "dracut_args --omit-drivers \"radeon\" force_rebuild 1", "[main] dhcp=dhclient", "--extra-args=\"inst.repo=https://some/url/tree/path\"", "qemu-kvm: Requested safe indirect branch capability level not supported by kvm", "<domain type='qemu' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> <qemu:commandline> <qemu:arg value='-machine'/> <qemu:arg value='cap-ibs=workaround'/> </qemu:commandline>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.1_release_notes/rhel-8_1_0_release
Chapter 5. Deploying the all-in-one Red Hat OpenStack Platform environment
Chapter 5. Deploying the all-in-one Red Hat OpenStack Platform environment Procedure Log in to registry.redhat.io with your Red Hat credentials: Export the environment variables that the deployment command uses. In this example, deploy the all-in-one environment with the eth1 interface that has the IP address 192.168.25.2 on the management network: Set the hostname. If the node uses localhost.localdomain, the deployment fails. Enter the deployment command. Ensure that you include all .yaml files relevant to your environment: After a successful deployment, you can use the clouds.yaml configuration file in the /home/$USER/.config/openstack directory to query and verify the OpenStack services: To access the dashboard, go to http://192.168.25.2/dashboard and use the default username admin and the undercloud_admin_password value from the ~/standalone-passwords.conf file:
[ "[stack@all-in-one]USD sudo podman login registry.redhat.io", "[stack@all-in-one]USD export IP=192.168.25.2 [stack@all-in-one]USD export NETMASK=24 [stack@all-in-one]USD export INTERFACE=eth1", "[stack@all-in-one]USD hostnamectl set-hostname all-in-one.example.net [stack@all-in-one]USD hostnamectl set-hostname all-in-one.example.net --transient", "[stack@all-in-one]USD sudo openstack tripleo deploy --templates --local-ip=USDIP/USDNETMASK -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml -e USDHOME/containers-prepare-parameters.yaml -e USDHOME/standalone_parameters.yaml --output-dir USDHOME --standalone", "[stack@all-in-one]USD export OS_CLOUD=standalone [stack@all-in-one]USD openstack endpoint list", "[stack@all-in-one]USD cat standalone-passwords.conf | grep undercloud_admin_password:" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/standalone_deployment_guide/deploying-the-all-in-one-openstack-installation
7.13. RHEA-2014:1501 - new package: libestr
7.13. RHEA-2014:1501 - new package: libestr New libestr packages are now available for Red Hat Enterprise Linux 6. The libestr packages contain the string handling essentials library used by the Rsyslog daemon and are required by the rsyslog7 package. This enhancement update adds the libestr packages to Red Hat Enterprise Linux 6. (BZ#966966) All users who require libestr are advised to install these new packages.
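A minimal installation command is sketched below. It assumes the standard Red Hat Enterprise Linux 6 repositories are enabled; installing rsyslog7 normally pulls in libestr automatically as a dependency:
# Install the string handling library directly
yum install libestr
# Or install rsyslog7, which requires libestr and pulls it in as a dependency
yum install rsyslog7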
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/rhea-2014-1501
Monitoring
Monitoring OpenShift Dedicated 4 Monitoring projects in OpenShift Dedicated Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/monitoring/index
18.2. Operating System (external to RHCS) Log Settings
18.2. Operating System (external to RHCS) Log Settings 18.2.1. Enabling OS-level Audit Logs Warning All operations in the following sections have to be performed as root or a privileged user via sudo . The auditd logging framework provides many additional audit capabilities. These OS-level audit logs complement functionality provided by Certificate System directly. Before performing any of the following steps in this section, make sure the audit package is installed: Auditing of system package updates (using yum and rpm and including Certificate System) is automatically performed and requires no additional configuration. Note After adding each audit rule and restarting the auditd service, validate the new rules were added by running: The contents of the new rules should be visible in the output. For instructions on viewing the resulting audit logs, see the Displaying Operating System-level Audit Logs section in the Red Hat Certificate System Administration Guide . 18.2.1.1. Auditing Certificate System Audit Log Deletion To receive audit events for when audit logs are deleted, you need to audit system calls whose targets are Certificate System logs. Create the file /etc/audit/rules.d/rhcs-audit-log-deletion.rules with the following contents: Then restart auditd : 18.2.1.2. Auditing Unauthorized Certificate System Use of Secret Keys To receive audit events for all access to Certificate System Secret or Private keys, you need to audit the file system access to the NSS DB. Create the /etc/audit/rules.d/rhcs-audit-nssdb-access.rules file with the following contents: <instance name> is the name of the current instance. For each file (`<file>`) in /etc/pki/<instance name>/alias , add to /etc/audit/rules.d/rhcs-audit-nssdb-access.rules the following line : For example, if the instance name is pki-ca121318ec and cert9.db , key4.db , NHSM-CONN-XCcert9.db , NHSM-CONN-XCkey4.db , and pkcs11.txt are files, then the configuration file would contain: Then restart auditd : 18.2.1.3. Auditing Time Change Events To receive audit events for time changes, you need to audit a system call access which could modify the system time. Create the /etc/audit/rules.d/rhcs-audit-rhcs_audit_time_change.rules file with the following contents: Then restart auditd : For instructions on how to set time, see Setting Time and Date in Red Hat Enterprise Linux 7 in the Red Hat Certificate System Administration Guide . 18.2.1.4. Auditing Access to Certificate System Configuration To receive audit events for all modifications to the Certificate System instance configuration files, audit the file system access for these files. Create the /etc/audit/rules.d/rhcs-audit-config-access.rules file with the following contents: Additionally, add for each subsystem in the /etc/pki/ instance_name / directory the following contents: Example 18.1. rhcs-audit-config-access.rules Configuration File For example, if the instance name is pki-ca121318ec and only a CA is installed, the /etc/audit/rules.d/rhcs-audit-config-access.rules file would contain: Note that access to the PKI NSS database is already audited under rhcs_audit_nssdb .
[ "sudo yum install audit", "auditctl -l", "-a always,exit -F arch=b32 -S unlink -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b32 -S rename -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b32 -S rmdir -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b32 -S unlinkat -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b32 -S renameat -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b64 -S unlink -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b64 -S rename -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b64 -S rmdir -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b64 -S unlinkat -F dir=/var/log/pki -F key=rhcs_audit_deletion -a always,exit -F arch=b64 -S renameat -F dir=/var/log/pki -F key=rhcs_audit_deletion", "service auditd restart", "-w /etc/pki/<instance name>/alias -p warx -k rhcs_audit_nssdb", "-w /etc/pki/<instance name>/alias/<file> -p warx -k rhcs_audit_nssdb", "-w /etc/pki/pki-ca121318ec/alias -p warx -k rhcs_audit_nssdb -w /etc/pki/pki-ca121318ec/alias/cert9.db -p warx -k rhcs_audit_nssdb -w /etc/pki/pki-ca121318ec/alias/key4.db -p warx -k rhcs_audit_nssdb -w /etc/pki/pki-ca121318ec/alias/NHSM-CONN-XCcert9.db -p warx -k rhcs_audit_nssdb -w /etc/pki/pki-ca121318ec/alias/NHSM-CONN-XCkey4.db -p warx -k rhcs_audit_nssdb -w /etc/pki/pki-ca121318ec/alias/pkcs11.txt -p warx -k rhcs_audit_nssdb", "service auditd restart", "-a always,exit -F arch=b32 -S adjtimex,settimeofday,stime -F key=rhcs_audit_time_change -a always,exit -F arch=b64 -S adjtimex,settimeofday -F key=rhcs_audit_time_change -a always,exit -F arch=b32 -S clock_settime -F a0=0x0 -F key=rhcs_audit_time_change -a always,exit -F arch=b64 -S clock_settime -F a0=0x0 -F key=rhcs_audit_time_change -a always,exit -F arch=b32 -S clock_adjtime -F key=rhcs_audit_time_change -a always,exit -F arch=b64 -S clock_adjtime -F key=rhcs_audit_time_change -w /etc/localtime -p wa -k rhcs_audit_time_change", "service auditd restart", "-w /etc/pki/ instance_name /server.xml -p wax -k rhcs_audit_config", "-w /etc/pki/ instance_name / subsystem /CS.cfg -p wax -k rhcs_audit_config", "-w /etc/pki/pki-ca121318ec/server.xml -p wax -k rhcs_audit_config -w /etc/pki/pki-ca121318ec/ca/CS.cfg -p wax -k rhcs_audit_config" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/operating_system_external_to_rhcs_log_settings
Chapter 2. Managing compute machines with the Machine API
Chapter 2. Managing compute machines with the Machine API 2.1. Creating a machine set on AWS You can create a different machine set to serve a specific purpose in your OpenShift Container Platform cluster on Amazon Web Services (AWS). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.1.1. Sample YAML for a machine set custom resource on AWS This sample YAML defines a machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/<role>: "" 9 providerSpec: value: ami: id: ami-046fe691f52a953f9 10 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 11 instanceType: m4.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 12 region: <region> 13 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 14 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 15 tags: - name: kubernetes.io/cluster/<infrastructure_id> 16 value: owned userDataSecret: name: worker-user-data 1 3 5 11 14 16 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID, node label, and zone. 6 7 9 Specify the node label to add. 10 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) AMI for your AWS zone for your OpenShift Container Platform nodes. 
If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. 12 Specify the zone, for example, us-east-1a . 13 Specify the region, for example, us-east-1 . 15 Specify the infrastructure ID and zone. 2.1.2. Creating a machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml If you need compute machine sets in other availability zones, repeat this process to create more compute machine sets. Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new machine set is available, the DESIRED and CURRENT values match. 
If the machine set is not available, wait a few minutes and run the command again. 2.1.3. Machine sets that deploy machines as Spot Instances You can save on costs by creating a machine set running on AWS that deploys machines as non-guaranteed Spot Instances. Spot Instances utilize unused AWS EC2 capacity and are less expensive than On-Demand Instances. You can use Spot Instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads. AWS EC2 can terminate a Spot Instance at any time. AWS gives a two-minute warning to the user when an interruption occurs. OpenShift Container Platform begins to remove the workloads from the affected instances when AWS issues the termination warning. Interruptions can occur when using Spot Instances for the following reasons: The instance price exceeds your maximum price The demand for Spot Instances increases The supply of Spot Instances decreases When AWS terminates an instance, a termination handler running on the Spot Instance node deletes the machine resource. To satisfy the machine set replicas quantity, the machine set creates a machine that requests a Spot Instance. 2.1.4. Creating Spot Instances by using machine sets You can launch a Spot Instance on AWS by adding spotMarketOptions to your machine set YAML file. Procedure Add the following line under the providerSpec field: providerSpec: value: spotMarketOptions: {} You can optionally set the spotMarketOptions.maxPrice field to limit the cost of the Spot Instance. For example you can set maxPrice: '2.50' . If the maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to charge up to the On-Demand Instance price. Note It is strongly recommended to use the default On-Demand price as the maxPrice value and to not set the maximum price for Spot Instances. 2.1.5. Machine sets that deploy machines as Dedicated Instances You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account. Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware. 2.1.6. Creating Dedicated Instances by using machine sets You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS. Procedure Specify a dedicated tenancy under the providerSpec field: providerSpec: placement: tenancy: dedicated 2.2. Creating a machine set on Azure You can create a different machine set to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. 
Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.2.1. Sample YAML for a machine set custom resource on Azure This sample YAML defines a machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 11 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 12 offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: "" sshPublicKey: "" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: "1" 21 1 5 7 15 16 17 20 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 3 8 9 11 18 19 Specify the node label to add. 
4 6 10 Specify the infrastructure ID, node label, and region. 12 Specify the image details for your machine set. If you want to use an Azure Marketplace image, see "Selecting an Azure Marketplace image". 13 Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 14 Specify the region to place machines on. 21 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 2.2.2. Creating a machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
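If you only want the platform-specific settings from an existing compute machine set as a starting point for your new file, you can print just the providerSpec section. The following command is a sketch that reuses the <machineset_name> placeholder from the previous step; adapt it to your cluster:
USD oc get machineset <machineset_name> \ -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value}{"\n"}'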
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again. 2.2.3. Selecting an Azure Marketplace image You can create a machine set running on Azure that deploys machines that use the Azure Marketplace offering. To use this offering, you must first obtain the Azure Marketplace image. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you are going to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 Note Regardless of the version of OpenShift Container Platform you are installing, the correct version of the Azure Marketplace image to use is 4.8.x. If required, as part of the installation process, your VMs are automatically upgraded. 
Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer, specifically the values for publisher , offer , sku , and version . Add the following parameters to the providerSpec section of your machine set YAML file using the image details for your offer: Sample providerSpec image values for Azure Marketplace compute machines providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: "" sku: rh-ocp-worker type: MarketplaceWithPlan version: 4.8.2021122100 2.2.4. Machine sets that deploy machines as Spot VMs You can save on costs by creating a machine set running on Azure that deploys machines as non-guaranteed Spot VMs. Spot VMs utilize unused Azure capacity and are less expensive than standard VMs. You can use Spot VMs for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads. Azure can terminate a Spot VM at any time. Azure gives a 30-second warning to the user when an interruption occurs. OpenShift Container Platform begins to remove the workloads from the affected instances when Azure issues the termination warning. Interruptions can occur when using Spot VMs for the following reasons: The instance price exceeds your maximum price The supply of Spot VMs decreases Azure needs capacity back When Azure terminates an instance, a termination handler running on the Spot VM node deletes the machine resource. To satisfy the machine set replicas quantity, the machine set creates a machine that requests a Spot VM. 2.2.5. Creating Spot VMs by using machine sets You can launch a Spot VM on Azure by adding spotVMOptions to your machine set YAML file. Procedure Add the following line under the providerSpec field: providerSpec: value: spotVMOptions: {} You can optionally set the spotVMOptions.maxPrice field to limit the cost of the Spot VM. For example you can set maxPrice: '0.98765' . If the maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to -1 and charges up to the standard VM price. Azure caps Spot VM prices at the standard price. Azure will not evict an instance due to pricing if the instance is set with the default maxPrice . However, an instance can still be evicted due to capacity restrictions. Note It is strongly recommended to use the default standard VM price as the maxPrice value and to not set the maximum price for Spot VMs. 2.2.6. Enabling customer-managed encryption keys for a machine set You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API. An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. 
The disk encryption set must reside in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If not, an additional reader role must be granted on the disk encryption set. Prerequisites Create an Azure Key Vault instance . Create an instance of a disk encryption set . Grant the disk encryption set access to key vault . Procedure Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example: ... providerSpec: value: ... osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS ... Additional resources You can learn more about customer-managed keys in the Azure documentation. 2.3. Creating a machine set on GCP You can create a different machine set to serve a specific purpose in your OpenShift Container Platform cluster on Google Cloud Platform (GCP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.3.1. Sample YAML for a machine set custom resource on GCP This sample YAML defines a machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 For <node> , specify the node label to add. 3 Specify the path to the image that is used in current compute machine sets. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \ get machineset/<infrastructure_id>-worker-a To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-48-x86-64-202206140145 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-48-x86-64-202206140145 4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 5 For <project_name> , specify the name of the GCP project that you use for your cluster. 2.3.2. Creating a machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml . 
Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again. 2.3.3. Machine sets that deploy machines as preemptible VM instances You can save on costs by creating a machine set running on GCP that deploys machines as non-guaranteed preemptible VM instances. Preemptible VM instances utilize excess Compute Engine capacity and are less expensive than normal instances. You can use preemptible VM instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads. GCP Compute Engine can terminate a preemptible VM instance at any time. Compute Engine sends a preemption notice to the user indicating that an interruption will occur in 30 seconds. OpenShift Container Platform begins to remove the workloads from the affected instances when Compute Engine issues the preemption notice. 
An ACPI G3 Mechanical Off signal is sent to the operating system after 30 seconds if the instance is not stopped. The preemptible VM instance is then transitioned to a TERMINATED state by Compute Engine. Interruptions can occur when using preemptible VM instances for the following reasons: There is a system or maintenance event The supply of preemptible VM instances decreases The instance reaches the end of the allotted 24-hour period for preemptible VM instances When GCP terminates an instance, a termination handler running on the preemptible VM instance node deletes the machine resource. To satisfy the machine set replicas quantity, the machine set creates a machine that requests a preemptible VM instance. 2.3.4. Creating preemptible VM instances by using machine sets You can launch a preemptible VM instance on GCP by adding preemptible to your machine set YAML file. Procedure Add the following line under the providerSpec field: providerSpec: value: preemptible: true If preemptible is set to true , the machine is labeled as an interruptable-instance after the instance is launched. 2.3.5. Enabling customer-managed encryption keys for a machine set Google Cloud Platform (GCP) Compute Engine allows users to supply an encryption key to encrypt data on disks at rest. The key is used to encrypt the data encryption key, not to encrypt the customer's data. By default, Compute Engine encrypts this data by using Compute Engine keys. You can enable encryption with a customer-managed key by using the Machine API. You must first create a KMS key and assign the correct permissions to a service account. The KMS key name, key ring name, and location are required to allow a service account to use your key. Note If you do not want to use a dedicated service account for the KMS encryption, the Compute Engine default service account is used instead. You must grant the default service account permission to access the keys if you do not use a dedicated service account. The Compute Engine default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. Procedure Run the following command with your KMS key name, key ring name, and location to allow a specific service account to use your KMS key and to grant the service account the correct IAM role: gcloud kms keys add-iam-policy-binding <key_name> \ --keyring <key_ring_name> \ --location <key_ring_location> \ --member "serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com" \ --role roles/cloudkms.cryptoKeyEncrypterDecrypter Configure the encryption key under the providerSpec field in your machine set YAML file. For example: providerSpec: value: # ... disks: - type: # ... encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encryption-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5 1 The name of the customer-managed encryption key that is used for the disk encryption. 2 The name of the KMS key ring that the KMS key belongs to. 3 The GCP location in which the KMS key ring exists. 4 Optional: The ID of the project in which the KMS key ring exists. If a project ID is not set, the machine set projectID in which the machine set was created is used. 5 Optional: The service account that is used for the encryption request for the given KMS key. If a service account is not set, the Compute Engine default service account is used.
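Optional: before you update the machine set, you can confirm that the binding from the previous step took effect by inspecting the IAM policy on the key. This check is an illustrative sketch rather than a documented step, and it reuses the placeholders from the gcloud command above:
gcloud kms keys get-iam-policy <key_name> \ --keyring <key_ring_name> \ --location <key_ring_location>
The output should include the service account bound to the roles/cloudkms.cryptoKeyEncrypterDecrypter role.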
After a new machine is created by using the updated providerSpec object configuration, the disk encryption key is encrypted with the KMS key. 2.4. Creating a machine set on OpenStack You can create a different machine set to serve a specific purpose in your OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.4.1. Sample YAML for a machine set custom resource on RHOSP This sample YAML defines a machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: <rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone> 1 5 7 13 15 16 17 18 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 19 Specify the node label to add. 
4 6 10 Specify the infrastructure ID and node label. 11 To set a server group policy for the MachineSet, enter the value that is returned from creating a server group . For most deployments, anti-affinity or soft-anti-affinity policies are recommended. 12 Required for deployments to multiple networks. To specify multiple networks, add another entry in the networks array. Also, you must include the network that is used as the primarySubnet value. 14 Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file. 2.4.2. Sample YAML for a machine set custom resource that uses SR-IOV on RHOSP If you configured your cluster for single-root I/O virtualization (SR-IOV), you can create machine sets that use that technology. This sample YAML defines a machine set that uses SR-IOV networks. The nodes that it creates are labeled with node-role.openshift.io/<node_role>: "" In this sample, infrastructure_id is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and node_role is the node label to add. The sample assumes two SR-IOV networks that are named "radio" and "uplink". The networks are used in port definitions in the spec.template.spec.providerSpec.value.ports list. Note Only parameters that are specific to SR-IOV deployments are described in this sample. To review a more general sample, see "Sample YAML for a machine set custom resource on RHOSP". An example machine set that uses SR-IOV networks apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: <machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> configDrive: true 9 1 5 Enter a network UUID for each port. 
2 6 Enter a subnet UUID for each port. 3 7 The value of the vnicType parameter must be direct for each port. 4 8 The value of the portSecurity parameter must be false for each port. You cannot set security groups and allowed address pairs for ports when port security is disabled. Setting security groups on the instance applies the groups to all ports that are attached to it. 9 The value of the configDrive parameter must be true . Note Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix> . The nameSuffix field is required in port definitions. You can enable trunking for each port. Optionally, you can add tags to ports as part of their tags lists. Additional resources Installing a cluster on OpenStack that supports SR-IOV-connected compute machines 2.4.3. Sample YAML for SR-IOV deployments where port security is disabled To create single-root I/O virtualization (SR-IOV) ports on a network that has port security disabled, define a machine set that includes the ports as items in the spec.template.spec.providerSpec.value.ports list. This difference from the standard SR-IOV machine set is due to the automatic security group and allowed address pair configuration that occurs for ports that are created by using the network and subnet interfaces. Ports that you define for machines subnets require: Allowed address pairs for the API and ingress virtual IP ports The compute security group Attachment to the machines network and subnet Note Only parameters that are specific to SR-IOV deployments where port security is disabled are described in this sample. To review a more general sample, see "Sample YAML for a machine set custom resource that uses SR-IOV on RHOSP".
An example machine set that uses SR-IOV networks and has port security disabled apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data configDrive: True 1 Specify allowed address pairs for the API and ingress ports. 2 3 Specify the machines network and subnet. 4 Specify the compute machines security group. Note Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix> . The nameSuffix field is required in port definitions. You can enable trunking for each port. Optionally, you can add tags to ports as part of their tags lists. If your cluster uses Kuryr and the RHOSP SR-IOV network has port security disabled, the primary port for compute machines must have: The value of the spec.template.spec.providerSpec.value.networks.portSecurityEnabled parameter set to false . For each subnet, the value of the spec.template.spec.providerSpec.value.networks.subnets.portSecurityEnabled parameter set to false . The value of spec.template.spec.providerSpec.value.securityGroups set to empty: [] . An example section of a machine set for a cluster on Kuryr that uses SR-IOV and has port security disabled ... networks: - subnets: - uuid: <machines_subnet_UUID> portSecurityEnabled: false portSecurityEnabled: false securityGroups: [] ... In that case, you can apply the compute security group to the primary VM interface after the VM is created. For example, from a command line: USD openstack port set --enable-port-security --security-group <infrastructure_id>-<node_role> <main_port_ID> 2.4.4. Creating a machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. 
Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again. 2.5. Creating a machine set on RHV You can create a different machine set to serve a specific purpose in your OpenShift Container Platform cluster on Red Hat Virtualization (RHV). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. 
Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.5.1. Sample YAML for a machine set custom resource on RHV This sample YAML defines a machine set that runs on RHV and creates nodes that are labeled with node-role.kubernetes.io/<node_role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: "" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 instance_type_id: <instance_type_id> 16 cpu: 17 sockets: <number_of_sockets> 18 cores: <number_of_cores> 19 threads: <number_of_threads> 20 memory_mb: <memory_size> 21 os_disk: 22 size_gb: <disk_size> 23 network_interfaces: 24 vnic_profile_id: <vnic_profile_id> 25 credentialsSecret: name: ovirt-credentials 26 kind: OvirtMachineProviderSpec type: <workload_type> 27 auto_pinning_policy: <auto_pinning_policy> 28 hugepages: <hugepages> 29 affinityGroupsNames: - compute 30 userDataSecret: name: worker-user-data 1 7 9 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 10 11 13 Specify the node label to add. 4 8 12 Specify the infrastructure ID and node label. These two strings together cannot be longer than 35 characters. 5 Specify the number of machines to create. 6 Selector for the machines. 14 Specify the UUID for the RHV cluster to which this VM instance belongs. 15 Specify the RHV VM template to use to create the machine. 16 Optional: Specify the VM instance type. Warning The instance_type_id field is deprecated and will be removed in a future release. If you include this parameter, you do not need to specify the hardware parameters of the VM including CPU and memory because this parameter overrides all hardware parameters. 17 Optional: The CPU field contains the CPU's configuration, including sockets, cores, and threads. 18 Optional: Specify the number of sockets for a VM. 19 Optional: Specify the number of cores per socket. 
20 Optional: Specify the number of threads per core. 21 Optional: Specify the size of a VM's memory in MiB. 22 Optional: Root disk of the node. 23 Optional: Specify the size of the bootable disk in GiB. 24 Optional: List of the network interfaces of the VM. If you include this parameter, OpenShift Container Platform discards all network interfaces from the template and creates new ones. 25 Optional: Specify the vNIC profile ID. 26 Specify the name of the secret that holds the RHV credentials. 27 Optional: Specify the workload type for which the instance is optimized. This value affects the RHV VM parameter. Supported values: desktop , server (default), high_performance . high_performance improves performance on the VM, but there are limitations. For example, you cannot access the VM with a graphical console. For more information, see Configuring High Performance Virtual Machines, Templates, and Pools in the Virtual Machine Management Guide . 28 Optional: AutoPinningPolicy defines the policy that automatically sets CPU and NUMA settings, including pinning to the host for this instance. Supported values: none , resize_and_pin . For more information, see Setting NUMA Nodes in the Virtual Machine Management Guide . 29 Optional: Hugepages is the size in KiB for defining hugepages in a VM. Supported values: 2048 or 1048576 . For more information, see Configuring Huge Pages in the Virtual Machine Management Guide . 30 Optional: A list of affinity group names that should be applied to the VMs. The affinity groups must exist in oVirt. Note Because RHV uses a template when creating a VM, if you do not specify a value for an optional parameter, RHV uses the value for that parameter that is specified in the template. 2.5.2. Creating a machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. 
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again. 2.6. Creating a machine set on vSphere You can create a different machine set to serve a specific purpose in your OpenShift Container Platform cluster on VMware vSphere. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.6.1. 
Sample YAML for a machine set custom resource on vSphere This sample YAML defines a machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: "<vm_network_name>" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and node label. 6 7 9 Specify the node label to add. 10 Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other compute machines reside in the cluster. 11 Specify the vSphere VM template to use, such as user-5ddjd-rhcos . 12 Specify the vCenter Datacenter to deploy the compute machine set on. 13 Specify the vCenter Datastore to deploy the compute machine set on. 14 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 15 Specify the vSphere resource pool for your VMs. 16 Specify the vCenter server IP or fully qualified domain name. 2.6.2. Minimum required vCenter privileges for machine set management To manage machine sets in an OpenShift Container Platform cluster on vCenter, you must use an account with privileges to read, create, and delete the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the minimum required privileges. The following table lists the minimum vCenter roles and privileges that are required to create, scale, and delete machine sets and to delete machines in your OpenShift Container Platform cluster. Example 2.1. 
Minimum vCenter roles and privileges required for machine set management vSphere object for role When required Required privileges vSphere vCenter Always InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update 1 StorageProfile.View 1 vSphere vCenter Cluster Always Resource.AssignVMToPool vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse vSphere Port Group Always Network.Assign Virtual Machine Folder Always VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.Memory VirtualMachine.Config.Settings VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone vSphere vCenter Datacenter If the installation program creates the virtual machine folder Resource.AssignVMToPool VirtualMachine.Provisioning.DeployTemplate 1 The StorageProfile.Update and StorageProfile.View permissions are required only for storage backends that use the Container Storage Interface (CSI). Important Some CSI drivers and features are in Technology Preview in OpenShift Container Platform 4.9. For more information, see CSI drivers supported by OpenShift Container Platform . The following table details the permissions and propagation settings that are required for machine set management. Example 2.2. Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always Not required Listed required privileges vSphere vCenter Datacenter Existing folder Not required ReadOnly permission Installation program creates the folder Required Listed required privileges vSphere vCenter Cluster Always Required Listed required privileges vSphere vCenter Datastore Always Not required Listed required privileges vSphere Switch Always Not required ReadOnly permission vSphere Port Group Always Not required Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder Required Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Additional resources For more information about CSI driver and feature support, see CSI drivers supported by OpenShift Container Platform . 2.6.3. Requirements for clusters with user-provisioned infrastructure to use compute machine sets To use compute machine sets on clusters that have user-provisioned infrastructure, you must ensure that your cluster configuration supports using the Machine API. Obtaining the infrastructure ID To create compute machine sets, you must be able to supply the infrastructure ID for your cluster. Procedure To obtain the infrastructure ID for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}' Satisfying vSphere credentials requirements To use compute machine sets, the Machine API must be able to interact with vCenter. Credentials that authorize the Machine API components to interact with vCenter must exist in a secret in the openshift-machine-api namespace.
Procedure To determine whether the required credentials exist, run the following command: USD oc get secret \ -n openshift-machine-api vsphere-cloud-credentials \ -o go-template='{{range USDk,USDv := .data}}{{printf "%s: " USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{"\n"}}{{end}}' Sample output <vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user> where <vcenter-server> is the IP address or fully qualified domain name (FQDN) of the vCenter server and <openshift-user> and <openshift-user-password> are the OpenShift Container Platform administrator credentials to use. If the secret does not exist, create it by running the following command: USD oc create secret generic vsphere-cloud-credentials \ -n openshift-machine-api \ --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password> Satisfying Ignition configuration requirements Provisioning virtual machines (VMs) requires a valid Ignition configuration. The Ignition configuration contains the machine-config-server address and a system trust bundle for obtaining further Ignition configurations from the Machine Config Operator. By default, this configuration is stored in the worker-user-data secret in the openshift-machine-api namespace. Compute machine sets reference the secret during the machine creation process. Procedure To determine whether the required secret exists, run the following command: USD oc get secret \ -n openshift-machine-api worker-user-data \ -o go-template='{{range USDk,USDv := .data}}{{printf "%s: " USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{"\n"}}{{end}}' Sample output disableTemplating: false userData: 1 { "ignition": { ... }, ... } 1 The full output is omitted here, but should have this format. If the secret does not exist, create it by running the following command: USD oc create secret generic worker-user-data \ -n openshift-machine-api \ --from-file=<installation_directory>/worker.ign where <installation_directory> is the directory that was used to store your installation assets during cluster installation. Additional resources Understanding the Machine Config Operator Installing RHCOS and starting the OpenShift Container Platform bootstrap process 2.6.4. Creating a machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Note Clusters that are installed with user-provisioned infrastructure have a different networking stack than clusters with infrastructure that is provisioned by the installation program. As a result of this difference, automatic load balancer management is unsupported on clusters that have user-provisioned infrastructure. For these clusters, a compute machine set can only create worker and infra type machines. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Have the necessary permissions to deploy VMs in your vCenter instance and have the required access to the datastore specified. If your cluster uses user-provisioned infrastructure, you have satisfied the specific Machine API requirements for that configuration. Procedure Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml .
Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. If you are creating a compute machine set for a cluster that has user-provisioned infrastructure, note the following important values: Example vSphere providerSpec values apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... template: ... spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: "<vm_network_name>" numCPUs: 4 numCoresPerSocket: 4 snapshot: "" template: <vm_template_name> 2 userDataSecret: name: worker-user-data 3 workspace: datacenter: <vcenter_datacenter_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_address> 4 1 The name of the secret in the openshift-machine-api namespace that contains the required vCenter credentials. 2 The name of the RHCOS VM template for your cluster that was created during installation. 3 The name of the secret in the openshift-machine-api namespace that contains the required Ignition configuration credentials. 4 The IP address or fully qualified domain name (FQDN) of the vCenter server. 
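Tip: If an existing vSphere compute machine set is already present in the cluster, you can read these values from it instead of looking them up manually. The following jsonpath queries are a minimal sketch that follows the same lookup pattern used for other providers in this document; the template and workspace.server fields come from the sample providerSpec above, and <machineset_name> is a placeholder for one of your existing compute machine sets.
USD oc get machineset <machineset_name> -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.template}{"\n"}'
USD oc get machineset <machineset_name> -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.workspace.server}{"\n"}'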
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again.
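After the machine set reports matching DESIRED and CURRENT values, you can also confirm that the underlying machines are being provisioned. The following command is a minimal sketch that filters machines by the machine.openshift.io/cluster-api-machineset label set in the machine set template above; <machineset_name> is a placeholder for the name of your new compute machine set.
USD oc get machine -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machineset_name>
Each machine eventually reaches the Running phase, and a corresponding node joins the cluster.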
[ "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: ami: id: ami-046fe691f52a953f9 10 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 11 instanceType: m4.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 12 region: <region> 13 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 14 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 15 tags: - name: kubernetes.io/cluster/<infrastructure_id> 16 value: owned userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "providerSpec: value: spotMarketOptions: {}", "providerSpec: placement: tenancy: dedicated", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: 
<infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 11 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 12 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: \"1\" 21", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "az vm image list --all --offer rh-ocp-worker --publisher redhat 
-o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 4.8.2021122100", "providerSpec: value: spotVMOptions: {}", "providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o 
jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "providerSpec: value: preemptible: true", "gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter", "providerSpec: value: # disks: - type: # encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: 
<rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone>", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: <machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> configDrive: true 9", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - 
subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data configDrive: True", "networks: - subnets: - uuid: <machines_subnet_UUID> portSecurityEnabled: false portSecurityEnabled: false securityGroups: []", "openstack port set --enable-port-security --security-group <infrastructure_id>-<node_role> <main_port_ID>", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 instance_type_id: <instance_type_id> 16 cpu: 17 sockets: <number_of_sockets> 18 cores: <number_of_cores> 19 threads: <number_of_threads> 20 memory_mb: 
<memory_size> 21 os_disk: 22 size_gb: <disk_size> 23 network_interfaces: 24 vnic_profile_id: <vnic_profile_id> 25 credentialsSecret: name: ovirt-credentials 26 kind: OvirtMachineProviderSpec type: <workload_type> 27 auto_pinning_policy: <auto_pinning_policy> 28 hugepages: <hugepages> 29 affinityGroupsNames: - compute 30 userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get infrastructure 
cluster -o jsonpath='{.status.infrastructureName}'", "oc get secret -n openshift-machine-api vsphere-cloud-credentials -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'", "<vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user>", "oc create secret generic vsphere-cloud-credentials -n openshift-machine-api --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password>", "oc get secret -n openshift-machine-api worker-user-data -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'", "disableTemplating: false userData: 1 { \"ignition\": { }, }", "oc create secret generic worker-user-data -n openshift-machine-api --from-file=<installation_directory>/worker.ign", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" numCPUs: 4 numCoresPerSocket: 4 snapshot: \"\" template: <vm_template_name> 2 userDataSecret: name: worker-user-data 3 workspace: datacenter: <vcenter_datacenter_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_address> 4", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/machine_management/managing-compute-machines-with-the-machine-api
34.2. Removing Red Hat Enterprise Linux from IBM Z
34.2. Removing Red Hat Enterprise Linux from IBM Z If you want to delete the existing operating system data and any Linux disks contain sensitive data, first ensure that you destroy that data according to your security policy. To proceed, you can consider these options: Overwrite the disks with a new installation. Make the DASD or SCSI disk where Linux was installed visible from another system, then delete the data. However, this might require special privileges. Ask your system administrator for advice. You can use Linux commands such as dasdfmt (DASD only), parted , mke2fs , or dd . For more details about the commands, see the respective man pages. 34.2.1. Running a Different Operating System on Your z/VM Guest or LPAR If you want to boot a z/VM guest virtual machine or an LPAR from a DASD or SCSI disk other than the one where the currently installed system resides, shut down the installed Red Hat Enterprise Linux system and boot from the desired disk, where another Linux instance is installed. This leaves the contents of the installed system unchanged.
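If you choose to delete the data from another system, a low-level overwrite with dd is one option. The following invocation is a minimal sketch only; /dev/dasdb is a hypothetical device name, so confirm the correct device and make sure this level of overwriting satisfies your security policy before running it.
dd if=/dev/zero of=/dev/dasdb bs=4096    # destroys all data on the hypothetical device /dev/dasdb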
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-uninstall-rhel-s390
Chapter 9. Visualizing your costs with Cost Explorer
Chapter 9. Visualizing your costs with Cost Explorer Use cost management Cost Explorer to create custom graphs of time-scaled cost and usage information and ultimately better visualize and interpret your costs. To learn more about the following topics, see Visualizing your costs using Cost Explorer : Using Cost Explorer to identify abnormal events Understanding how your cost data changes over time Creating custom bar charts of your cost and usage data Exporting custom cost data tables
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_amazon_web_services_aws_data_into_cost_management/cost-explorer-next-step_next-steps-aws
10.5. Library Changes
10.5. Library Changes 32-bit libraries are not installed by default on Red Hat Enterprise Linux 6. You can change this behavior by setting multilib_policy=all in /etc/yum.conf , which enables the multilib policy system-wide.
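For example, the setting belongs in the [main] section of /etc/yum.conf ; this is a minimal sketch of the relevant lines only:
[main]
multilib_policy=all
After the change is saved, subsequent yum install operations install all available architectures of a package, including the 32-bit variants where they exist.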
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-migration_guide-package_changes-library_changes
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/authorization_of_web_endpoints/making-open-source-more-inclusive
3.5. Performing the PXE Installation
3.5. Performing the PXE Installation For instructions on how to configure the network interface card with PXE support to boot from the network, consult the documentation for the NIC. It varies slightly per card. After the system boots the installation program, refer to the Installation Guide .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/PXE_Network_Installations-Performing_the_PXE_Installation
4. We Need Feedback!
4. We Need Feedback! If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla: http://bugzilla.redhat.com/ against the product Red Hat Enterprise Linux 6 and the component doc-DM_Multipath . When submitting a bug report, be sure to mention the manual's identifier: If you have a suggestion for improving the documentation, try to be as specific as possible when describing it. If you have found an error, include the section number and some of the surrounding text so we can find it easily.
[ "rh-DM_Multipath(EN)-6 (2017-3-8T15:15)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/sect-redhat-we_need_feedback
Chapter 5. Control plane backup and restore
Chapter 5. Control plane backup and restore 5.1. Backing up etcd etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects. Back up your cluster's etcd data regularly and store it in a secure location, ideally outside the OpenShift Container Platform environment. Do not take an etcd backup before the first certificate rotation completes, which occurs 24 hours after installation; otherwise, the backup will contain expired certificates. It is also recommended to take etcd backups during non-peak usage hours because the etcd snapshot has a high I/O cost. Be sure to take an etcd backup after you upgrade your cluster. This is important because when you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.y.z cluster must use an etcd backup that was taken from 4.y.z. Important Back up your cluster's etcd data by performing a single invocation of the backup script on a control plane host. Do not take a backup for each control plane host. After you have an etcd backup, you can restore to a cluster state . 5.1.1. Backing up etcd data Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd. Important Only save a backup from a single control plane host. Do not take a backup from each control plane host in the cluster. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have checked whether the cluster-wide proxy is enabled. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml . The proxy is enabled if the httpProxy , httpsProxy , and noProxy fields have values set. Procedure Start a debug session as root for a control plane node: USD oc debug --as-root node/<node_name> Change your root directory to /host in the debug shell: sh-4.4# chroot /host If the cluster-wide proxy is enabled, export the NO_PROXY , HTTP_PROXY , and HTTPS_PROXY environment variables by running the following commands: USD export HTTP_PROXY=http://<your_proxy.example.com>:8080 USD export HTTPS_PROXY=https://<your_proxy.example.com>:8080 USD export NO_PROXY=<example.com> Run the cluster-backup.sh script in the debug shell and pass in the location to save the backup to. Tip The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command.
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup Example script output found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {"level":"info","ts":1624647639.0188997,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part"} {"level":"info","ts":"2021-06-25T19:00:39.030Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"} {"level":"info","ts":1624647639.0301006,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"} {"level":"info","ts":"2021-06-25T19:00:40.215Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"} {"level":"info","ts":1624647640.6032252,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"114 MB","took":1.584090459} {"level":"info","ts":1624647640.6047094,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {"hash":3866667823,"revision":31407,"totalKey":12828,"totalSize":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup In this example, two files are created in the /home/core/assets/backup/ directory on the control plane host: snapshot_<datetimestamp>.db : This file is the etcd snapshot. The cluster-backup.sh script confirms its validity. static_kuberesources_<datetimestamp>.tar.gz : This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot. Note If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot. Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted. 5.1.2. Additional resources Backing up and restoring etcd on a hosted cluster 5.2. Replacing an unhealthy etcd member This document describes the process to replace a single unhealthy etcd member. This process depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or whether it is unhealthy because the etcd pod is crashlooping. Note If you have lost the majority of your control plane hosts, follow the disaster recovery procedure to restore to a cluster state instead of this procedure. If the control plane certificates are not valid on the member being replaced, then you must follow the procedure to recover from expired control plane certificates instead of this procedure. If a control plane node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member. 5.2.1. Prerequisites Take an etcd backup prior to replacing an unhealthy etcd member. 5.2.2. Identifying an unhealthy etcd member You can identify if your cluster has an unhealthy etcd member. 
Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Check the status of the EtcdMembersAvailable status condition using the following command: USD oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}' Review the output: 2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy This example output shows that the ip-10-0-131-183.ec2.internal etcd member is unhealthy. 5.2.3. Determining the state of the unhealthy etcd member The steps to replace an unhealthy etcd member depend on which of the following states your etcd member is in: The machine is not running or the node is not ready The etcd pod is crashlooping This procedure determines which state your etcd member is in. This enables you to know which procedure to follow to replace the unhealthy etcd member. Note If you are aware that the machine is not running or the node is not ready, but you expect it to return to a healthy state soon, then you do not need to perform a procedure to replace the etcd member. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have identified an unhealthy etcd member. Procedure Determine if the machine is not running : USD oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{"\t"}{@.status.providerStatus.instanceState}{"\n"}' | grep -v running Example output ip-10-0-131-183.ec2.internal stopped 1 1 This output lists the node and the status of the node's machine. If the status is anything other than running , then the machine is not running . If the machine is not running , then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure. Determine if the node is not ready . If either of the following scenarios are true, then the node is not ready . If the machine is running, then check whether the node is unreachable: USD oc get nodes -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{range .spec.taints[*]}{.key}{" "}' | grep unreachable Example output ip-10-0-131-183.ec2.internal node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable 1 1 If the node is listed with an unreachable taint, then the node is not ready . If the node is still reachable, then check whether the node is listed as NotReady : USD oc get nodes -l node-role.kubernetes.io/master | grep "NotReady" Example output ip-10-0-131-183.ec2.internal NotReady master 122m v1.26.0 1 1 If the node is listed as NotReady , then the node is not ready . If the node is not ready , then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure. Determine if the etcd pod is crashlooping . If the machine is running and the node is ready, then check whether the etcd pod is crashlooping. 
Verify that all control plane nodes are listed as Ready : USD oc get nodes -l node-role.kubernetes.io/master Example output NAME STATUS ROLES AGE VERSION ip-10-0-131-183.ec2.internal Ready master 6h13m v1.26.0 ip-10-0-164-97.ec2.internal Ready master 6h13m v1.26.0 ip-10-0-154-204.ec2.internal Ready master 6h13m v1.26.0 Check whether the status of an etcd pod is either Error or CrashLoopBackOff : USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m 1 etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m 1 Because the status of this pod is Error , the etcd pod is crashlooping . If the etcd pod is crashlooping , then follow the Replacing an unhealthy etcd member whose etcd pod is crashlooping procedure. 5.2.4. Replacing the unhealthy etcd member Depending on the state of your unhealthy etcd member, use one of the following procedures: Replacing an unhealthy etcd member whose machine is not running or whose node is not ready Replacing an unhealthy etcd member whose etcd pod is crashlooping Replacing an unhealthy stopped baremetal etcd member 5.2.4.1. Replacing an unhealthy etcd member whose machine is not running or whose node is not ready This procedure details the steps to replace an etcd member that is unhealthy either because the machine is not running or because the node is not ready. Note If your cluster uses a control plane machine set, see "Recovering a degraded etcd Operator" in "Troubleshooting the control plane machine set" for a simpler etcd recovery procedure. Prerequisites You have identified the unhealthy etcd member. You have verified that either the machine is not running or the node is not ready. Important You must wait if the other control plane nodes are powered off. The control plane nodes must remain powered off until the replacement of an unhealthy etcd member is complete. You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues. Procedure Remove the unhealthy member.
Choose a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m Connect to the running etcd container, passing in the name of a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure. The USD etcdctl endpoint health command will list the removed member until the procedure of replacement is finished and a new member is added. Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command: sh-4.2# etcdctl member remove 6fc1e7c9db35841d Example output Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346 View the member list again and verify that the member was removed: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ You can now exit the node shell. Turn off the quorum guard by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. Important After you turn off the quorum guard, the cluster might be unreachable for a short time while the remaining etcd instances reboot to reflect the configuration change. Note etcd cannot tolerate any additional member failure when running with two members. Restarting either remaining member breaks the quorum and causes downtime in your cluster. The quorum guard protects etcd from restarts due to configuration changes that could cause downtime, so it must be disabled to complete this procedure. 
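The member lookup and removal steps earlier in this procedure can also be scripted. A minimal sketch, run from inside a healthy etcd pod (for example through oc rsh); UNHEALTHY_NAME is an assumed placeholder for the member name you noted from the member list:

# Look up the member ID by name and remove that member.
UNHEALTHY_NAME="ip-10-0-131-183.ec2.internal"   # placeholder: the unhealthy member name
MEMBER_ID=$(etcdctl member list | awk -F', ' -v name="$UNHEALTHY_NAME" '$3 == name {print $1}')
echo "Removing etcd member ${MEMBER_ID} (${UNHEALTHY_NAME})"
etcdctl member remove "${MEMBER_ID}"
etcdctl member list -w table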
Delete the affected node by running the following command: USD oc delete node <node_name> Example command USD oc delete node ip-10-0-131-183.ec2.internal Remove the old secrets for the unhealthy etcd member that was removed. List the secrets for the unhealthy etcd member that was removed. USD oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1 1 Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure. There is a peer, serving, and metrics secret as shown in the following output: Example output etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m Delete the secrets for the unhealthy etcd member that was removed. Delete the peer secret: USD oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal Delete the serving secret: USD oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal Delete the metrics secret: USD oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal Delete and re-create the control plane machine. After this machine is re-created, a new revision is forced and etcd scales up automatically. If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new master by using the same method that was used to originally create it. Obtain the machine for the unhealthy member. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 This is the control plane machine for the unhealthy node, ip-10-0-131-183.ec2.internal . Delete the machine of the unhealthy member: USD oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1 1 Specify the name of the control plane machine for the unhealthy node. A new machine is automatically provisioned after deleting the machine of the unhealthy member. 
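The secret cleanup and machine deletion above can be condensed into a short helper. A sketch only; NODE_NAME and MACHINE_NAME are placeholders for the values from your own cluster:

# Delete the peer, serving, and metrics secrets for the removed member,
# then delete its control plane machine so a replacement is provisioned.
NODE_NAME="ip-10-0-131-183.ec2.internal"        # placeholder: unhealthy node name
MACHINE_NAME="clustername-8qw5l-master-0"       # placeholder: its control plane machine
for prefix in etcd-peer etcd-serving etcd-serving-metrics; do
  oc delete secret -n openshift-etcd "${prefix}-${NODE_NAME}"
done
oc delete machine -n openshift-machine-api "${MACHINE_NAME}"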
Verify that a new machine has been created: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-133-53.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 The new machine, clustername-8qw5l-master-3 is being created and is ready once the phase changes from Provisioning to Running . It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Note Verify the subnet IDs that you are using for your machine sets to ensure that they end up in the correct availability zone. Turn the quorum guard back on by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command: USD oc get etcd/cluster -oyaml If you are using single-node OpenShift, restart the node. Otherwise, you might encounter the following error in the etcd cluster Operator: Example output EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again] Verification Verify that all etcd pods are running properly. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-133-53.ec2.internal 3/3 Running 0 7m49s etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m If the output from the command only lists two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. Verify that there are exactly three etcd members. 
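Before the member list check that follows, you can optionally wait until three etcd pods report Running. A minimal polling sketch; the retry count and interval are assumptions, adjust as needed:

# Poll until three etcd pods report Running, or give up after roughly 30 minutes.
for attempt in $(seq 1 60); do
  running=$(oc -n openshift-etcd get pods -l k8s-app=etcd --no-headers | grep -c " Running ")
  echo "attempt ${attempt}: ${running}/3 etcd pods Running"
  [ "${running}" -eq 3 ] && break
  sleep 30
done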
Connect to the running etcd container, passing in the name of a pod that was not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 5eb0d6b8ca24730c | started | ip-10-0-133-53.ec2.internal | https://10.0.133.53:2380 | https://10.0.133.53:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ If the output from the command lists more than three etcd members, you must carefully remove the unwanted member. Warning Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss. Additional resources Recovering a degraded etcd Operator 5.2.4.2. Replacing an unhealthy etcd member whose etcd pod is crashlooping This procedure details the steps to replace an etcd member that is unhealthy because the etcd pod is crashlooping. Prerequisites You have identified the unhealthy etcd member. You have verified that the etcd pod is crashlooping. You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues. Procedure Stop the crashlooping etcd pod. Debug the node that is crashlooping. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc debug node/ip-10-0-131-183.ec2.internal 1 1 Replace this with the name of the unhealthy node. Change your root directory to /host : sh-4.2# chroot /host Move the existing etcd pod file out of the kubelet manifest directory: sh-4.2# mkdir /var/lib/etcd-backup sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/ Move the etcd data directory to a different location: sh-4.2# mv /var/lib/etcd/ /tmp You can now exit the node shell. Remove the unhealthy member. Choose a pod that is not on the affected node. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m Connect to the running etcd container, passing in the name of a pod that is not on the affected node. 
In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure. Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command: sh-4.2# etcdctl member remove 62bcf33650a7170a Example output Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346 View the member list again and verify that the member was removed: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ You can now exit the node shell. Turn off the quorum guard by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. Remove the old secrets for the unhealthy etcd member that was removed. List the secrets for the unhealthy etcd member that was removed. USD oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1 1 Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure. There is a peer, serving, and metrics secret as shown in the following output: Example output etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m Delete the secrets for the unhealthy etcd member that was removed. Delete the peer secret: USD oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal Delete the serving secret: USD oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal Delete the metrics secret: USD oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal Force etcd redeployment. 
In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "single-master-recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. When the etcd cluster Operator performs a redeployment, it ensures that all control plane nodes have a functioning etcd pod. Turn the quorum guard back on by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command: USD oc get etcd/cluster -oyaml If you are using single-node OpenShift, restart the node. Otherwise, you might encounter the following error in the etcd cluster Operator: Example output EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again] Verification Verify that the new member is available and healthy. Connect to the running etcd container again. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal Verify that all members are healthy: sh-4.2# etcdctl endpoint health Example output https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms 5.2.4.3. Replacing an unhealthy bare metal etcd member whose machine is not running or whose node is not ready This procedure details the steps to replace a bare metal etcd member that is unhealthy either because the machine is not running or because the node is not ready. If you are running installer-provisioned infrastructure or you used the Machine API to create your machines, follow these steps. Otherwise you must create the new control plane node using the same method that was used to originally create it. Prerequisites You have identified the unhealthy bare metal etcd member. You have verified that either the machine is not running or the node is not ready. You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important You must take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues. Procedure Verify and remove the unhealthy member. 
Choose a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd -o wide Example output etcd-openshift-control-plane-0 5/5 Running 11 3h56m 192.168.10.9 openshift-control-plane-0 <none> <none> etcd-openshift-control-plane-1 5/5 Running 0 3h54m 192.168.10.10 openshift-control-plane-1 <none> <none> etcd-openshift-control-plane-2 5/5 Running 0 3h58m 192.168.10.11 openshift-control-plane-2 <none> <none> Connect to the running etcd container, passing in the name of a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-openshift-control-plane-0 View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380/ | https://192.168.10.9:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ Take note of the ID and the name of the unhealthy etcd member, because these values are required later in the procedure. The etcdctl endpoint health command will list the removed member until the replacement procedure is completed and the new member is added. Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command: Warning Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss. sh-4.2# etcdctl member remove 7a8197040a5126c8 Example output Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1b View the member list again and verify that the member was removed: sh-4.2# etcdctl member list -w table Example output +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | cc3830a72fc357f9 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ You can now exit the node shell. Important After you remove the member, the cluster might be unreachable for a short time while the remaining etcd instances reboot. 
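Later steps in this procedure need the Machine object that corresponds to the unhealthy node. A sketch that looks it up by node name; NODE_NAME is an assumed placeholder:

# Print the name of the Machine whose nodeRef matches the unhealthy node.
NODE_NAME="openshift-control-plane-2"           # placeholder: unhealthy node name
oc get machines -n openshift-machine-api \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeRef.name}{"\n"}{end}' \
  | awk -v node="$NODE_NAME" '$2 == node {print $1}'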
Turn off the quorum guard by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. Remove the old secrets for the unhealthy etcd member that was removed by running the following commands. List the secrets for the unhealthy etcd member that was removed. USD oc get secrets -n openshift-etcd | grep openshift-control-plane-2 Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure. There is a peer, serving, and metrics secret as shown in the following output: etcd-peer-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-metrics-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-openshift-control-plane-2 kubernetes.io/tls 2 134m Delete the secrets for the unhealthy etcd member that was removed. Delete the peer secret: USD oc delete secret etcd-peer-openshift-control-plane-2 -n openshift-etcd secret "etcd-peer-openshift-control-plane-2" deleted Delete the serving secret: USD oc delete secret etcd-serving-openshift-control-plane-2 -n openshift-etcd secret "etcd-serving-openshift-control-plane-2" deleted Delete the metrics secret: USD oc delete secret etcd-serving-metrics-openshift-control-plane-2 -n openshift-etcd secret "etcd-serving-metrics-openshift-control-plane-2" deleted Obtain the machine for the unhealthy member. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned 1 examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned 1 This is the control plane machine for the unhealthy node, examplecluster-control-plane-2 . Ensure that the Bare Metal Operator is available by running the following command: USD oc get clusteroperator baremetal Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.13.0 True False False 3d15h Remove the old BareMetalHost object by running the following command: USD oc delete bmh openshift-control-plane-2 -n openshift-machine-api Example output baremetalhost.metal3.io "openshift-control-plane-2" deleted Delete the machine of the unhealthy member by running the following command: USD oc delete machine -n openshift-machine-api examplecluster-control-plane-2 After you remove the BareMetalHost and Machine objects, the Machine controller automatically deletes the Node object.
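You can optionally wait for that Node object to disappear before continuing. A minimal sketch; NODE_NAME is a placeholder for the node of the removed host:

# Wait for the Machine controller to remove the Node object.
NODE_NAME="openshift-control-plane-2"           # placeholder
while oc get node "${NODE_NAME}" >/dev/null 2>&1; do
  echo "waiting for node ${NODE_NAME} to be removed..."
  sleep 10
done
echo "node ${NODE_NAME} removed"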
If deletion of the machine is delayed for any reason or the command is obstructed and delayed, you can force deletion by removing the machine object finalizer field. Important Do not interrupt machine deletion by pressing Ctrl+c . You must allow the command to proceed to completion. Open a new terminal window to edit and delete the finalizer fields. A new machine is automatically provisioned after deleting the machine of the unhealthy member. Edit the machine configuration by running the following command: USD oc edit machine -n openshift-machine-api examplecluster-control-plane-2 Delete the following fields in the Machine custom resource, and then save the updated file: finalizers: - machine.machine.openshift.io Example output machine.machine.openshift.io/examplecluster-control-plane-2 edited Verify that the machine was deleted by running the following command: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned Verify that the node has been deleted by running the following command: USD oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 3h24m v1.26.0 openshift-control-plane-1 Ready master 3h24m v1.26.0 openshift-compute-0 Ready worker 176m v1.26.0 openshift-compute-1 Ready worker 176m v1.26.0 Create the new BareMetalHost object and the secret to store the BMC credentials: USD cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: openshift-control-plane-2-bmc-secret namespace: openshift-machine-api data: password: <password> username: <username> type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-control-plane-2 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: redfish://10.46.61.18:443/redfish/v1/Systems/1 credentialsName: openshift-control-plane-2-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:b0:8a:a0 bootMode: UEFI externallyProvisioned: false online: true rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> userData: name: master-user-data-managed namespace: openshift-machine-api EOF Note The username and password can be found from the other bare metal host's secrets. The protocol to use in bmc:address can be taken from other bmh objects. Important If you reuse the BareMetalHost object definition from an existing control plane host, do not leave the externallyProvisioned field set to true . Existing control plane BareMetalHost objects may have the externallyProvisioned flag set to true if they were provisioned by the OpenShift Container Platform installation program. After the inspection is complete, the BareMetalHost object is created and available to be provisioned. 
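If you prefer not to hand-encode the base64 data fields shown in the secret above, the same BMC credentials secret can be created from literals. A sketch only; the username and password values are placeholders:

# Create the BMC credentials secret without manually base64-encoding values.
oc create secret generic openshift-control-plane-2-bmc-secret \
  -n openshift-machine-api \
  --from-literal=username='<username>' \
  --from-literal=password='<password>'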
Verify the creation process using available BareMetalHost objects: USD oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 available examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m Verify that a new machine has been created: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned 1 examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned 1 The machine for the replaced control plane host is re-created automatically and is ready after the phase changes from Provisioning to Running . It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Verify that the bare metal host becomes provisioned and that no error is reported by running the following command: USD oc get bmh -n openshift-machine-api Example output NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 provisioned examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m Verify that the new node is added and in a ready state by running this command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 4h26m v1.26.0 openshift-control-plane-1 Ready master 4h26m v1.26.0 openshift-control-plane-2 Ready master 12m v1.26.0 openshift-compute-0 Ready worker 3h58m v1.26.0 openshift-compute-1 Ready worker 3h58m v1.26.0 Turn the quorum guard back on by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command: USD oc get etcd/cluster -oyaml If you are using single-node OpenShift, restart the node.
Otherwise, you might encounter the following error in the etcd cluster Operator: Example output EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again] Verification Verify that all etcd pods are running properly. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-openshift-control-plane-0 5/5 Running 0 105m etcd-openshift-control-plane-1 5/5 Running 0 107m etcd-openshift-control-plane-2 5/5 Running 0 103m If the output from the command only lists two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. To verify there are exactly three etcd members, connect to the running etcd container, passing in the name of a pod that was not on the affected node. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-openshift-control-plane-0 View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380 | https://192.168.10.11:2379 | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380 | https://192.168.10.9:2379 | false | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ Note If the output from the command lists more than three etcd members, you must carefully remove the unwanted member. Verify that all etcd members are healthy by running the following command: # etcdctl endpoint health --cluster Example output https://192.168.10.10:2379 is healthy: successfully committed proposal: took = 8.973065ms https://192.168.10.9:2379 is healthy: successfully committed proposal: took = 11.559829ms https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 11.665203ms Validate that all nodes are at the latest revision by running the following command: USD oc get etcd -o=jsonpath='{range.items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' 5.2.5. Additional resources Quorum protection with machine lifecycle hooks 5.3. Disaster recovery 5.3.1. 
About disaster recovery The disaster recovery documentation provides information for administrators on how to recover from several disaster situations that might occur with their OpenShift Container Platform cluster. As an administrator, you might need to follow one or more of the following procedures to return your cluster to a working state. Important Disaster recovery requires you to have at least one healthy control plane host. Restoring to a previous cluster state This solution handles situations where you want to restore your cluster to a previous state, for example, if an administrator deletes something critical. This also includes situations where you have lost the majority of your control plane hosts, leading to etcd quorum loss and the cluster going offline. As long as you have taken an etcd backup, you can follow this procedure to restore your cluster to a previous state. If applicable, you might also need to recover from expired control plane certificates . Warning Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This procedure should only be used as a last resort. Prior to performing a restore, see About restoring cluster state for more information on the impact to the cluster. Note If you have a majority of your masters still available and have an etcd quorum, then follow the procedure to replace a single unhealthy etcd member . Recovering from expired control plane certificates This solution handles situations where your control plane certificates have expired. For example, if you shut down your cluster before the first certificate rotation, which occurs 24 hours after installation, your certificates will not be rotated and will expire. You can follow this procedure to recover from expired control plane certificates. 5.3.2. Restoring to a previous cluster state To restore the cluster to a previous state, you must have previously backed up etcd data by creating a snapshot. You will use this snapshot to restore the cluster state. 5.3.2.1. About restoring cluster state You can use an etcd backup to restore your cluster to a previous state. This can be used to recover from the following situations: The cluster has lost the majority of control plane hosts (quorum loss). An administrator has deleted something critical and must restore to recover the cluster. Warning Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This should only be used as a last resort. If you are able to retrieve data using the Kubernetes API server, then etcd is available and you should not restore using an etcd backup. Restoring etcd effectively takes a cluster back in time and all clients will experience a conflicting, parallel history. This can impact the behavior of watching components like kubelets, Kubernetes controller managers, SDN controllers, and persistent volume controllers. It can cause Operator churn when the content in etcd does not match the actual content on disk, causing Operators for the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, and etcd to get stuck when files on disk conflict with content in etcd. This can require manual actions to resolve the issues. In extreme cases, the cluster can lose track of persistent volumes, delete critical workloads that no longer exist, reimage machines, and rewrite CA bundles with expired certificates. 5.3.2.2. Restoring to a previous cluster state You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.
Note If your cluster uses a control plane machine set, see "Troubleshooting the control plane machine set" for a more simple etcd recovery procedure. Important When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.7.2 cluster must use an etcd backup that was taken from 4.7.2. Prerequisites Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation. A healthy control plane host to use as the recovery host. SSH access to control plane hosts. A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz . Important For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and recreate other non-recovery, control plane machines, one by one. Procedure Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on. Establish SSH connectivity to each of the control plane nodes, including the recovery host. The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal. Important If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state. Copy the etcd backup directory to the recovery control plane host. This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host. Stop the static pods on any other control plane nodes. Note You do not need to stop the static pods on the recovery host. Access a control plane host that is not the recovery host. Move the existing etcd pod file out of the kubelet manifest directory: USD sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp Verify that the etcd pods are stopped. USD sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard" The output of this command should be empty. If it is not empty, wait a few minutes and check again. Move the existing Kubernetes API server pod file out of the kubelet manifest directory: USD sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp Verify that the Kubernetes API server pods are stopped. USD sudo crictl ps | grep kube-apiserver | egrep -v "operator|guard" The output of this command should be empty. If it is not empty, wait a few minutes and check again. Move the etcd data directory to a different location: USD sudo mv -v /var/lib/etcd/ /tmp If the /etc/kubernetes/manifests/keepalived.yaml file exists and the node is deleted, follow these steps: Move the /etc/kubernetes/manifests/keepalived.yaml file out of the kubelet manifest directory: USD sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp Verify that any containers managed by the keepalived daemon are stopped: USD sudo crictl ps --name keepalived The output of this command should be empty. If it is not empty, wait a few minutes and check again. 
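The static pod shutdown sequence above, which you run on each non-recovery control plane host, can be condensed into a few commands. A sketch only, to run in the SSH session you opened for that host:

# Stop the etcd and kube-apiserver static pods, wait for their containers
# to exit, then move the etcd data directory aside.
sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp
sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp
while sudo crictl ps | grep etcd | grep -Ev "operator|etcd-guard"; do sleep 10; done
while sudo crictl ps | grep kube-apiserver | grep -Ev "operator|guard"; do sleep 10; done
sudo mv -v /var/lib/etcd/ /tmp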
Check if the control plane has any Virtual IPs (VIPs) assigned to it: USD ip -o address | egrep '<api_vip>|<ingress_vip>' For each reported VIP, run the following command to remove it: USD sudo ip address del <reported_vip> dev <reported_vip_device> Repeat this step on each of the other control plane hosts that is not the recovery host. Access the recovery control plane host. If the keepalived daemon is in use, verify that the recovery control plane node owns the VIP: USD ip -o address | grep <api_vip> The address of the VIP is highlighted in the output if it exists. This command returns an empty string if the VIP is not set or configured incorrectly. If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY , HTTP_PROXY , and HTTPS_PROXY environment variables. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml . The proxy is enabled if the httpProxy , httpsProxy , and noProxy fields have values set. Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory: USD sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup Example script output ...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml Note The restore process can cause nodes to enter the NotReady state if the node certificates were updated after the last etcd backup. Check the nodes to ensure they are in the Ready state. Run the following command: USD oc get nodes -w Sample output NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.26.0 host-172-25-75-38 Ready infra,worker 3d20h v1.26.0 host-172-25-75-40 Ready master 3d20h v1.26.0 host-172-25-75-65 Ready master 3d20h v1.26.0 host-172-25-75-74 Ready infra,worker 3d20h v1.26.0 host-172-25-75-79 Ready worker 3d20h v1.26.0 host-172-25-75-86 Ready worker 3d20h v1.26.0 host-172-25-75-98 Ready infra,worker 3d20h v1.26.0 It can take several minutes for all nodes to report their state. If any nodes are in the NotReady state, log in to the nodes and remove all of the PEM files from the /var/lib/kubelet/pki directory on each node. You can SSH into the nodes or use the terminal window in the web console. USD ssh -i <ssh-key-path> core@<master-hostname> Sample pki directory sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem Restart the kubelet service on all control plane hosts. 
From the recovery host, run the following command: USD sudo systemctl restart kubelet.service Repeat this step on all other control plane hosts. Approve the pending CSRs: Note Clusters with no worker nodes, such as single-node clusters or clusters consisting of three schedulable control plane nodes, will not have any pending CSRs to approve. You can skip all the commands listed in this step. Get the list of current CSRs: USD oc get csr Example output 1 2 A pending kubelet service CSR (for user-provisioned installations). 3 4 A pending node-bootstrapper CSR. Review the details of a CSR to verify that it is valid: USD oc describe csr <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. Approve each valid node-bootstrapper CSR: USD oc adm certificate approve <csr_name> For user-provisioned installations, approve each valid kubelet service CSR: USD oc adm certificate approve <csr_name> Verify that the single member control plane has started successfully. From the recovery host, verify that the etcd container is running. USD sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard" Example output 3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0 From the recovery host, verify that the etcd pod is running. USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s If the status is Pending , or the output lists more than one running etcd pod, wait a few minutes and check again. If you are using the OVNKubernetes network plugin, delete the node objects that are associated with control plane hosts that are not the recovery control plane host. USD oc delete node <non-recovery-controlplane-host-1> <non-recovery-controlplane-host-2> Verify that the Cluster Network Operator (CNO) redeploys the OVN-Kubernetes control plane and that it no longer references the non-recovery controller IP addresses. To verify this result, regularly check the output of the following command. Wait until it returns an empty result before you proceed to restart the Open Virtual Network (OVN) Kubernetes pods on all of the hosts in the step. USD oc -n openshift-ovn-kubernetes get ds/ovnkube-master -o yaml | grep -E '<non-recovery_controller_ip_1>|<non-recovery_controller_ip_2>' Note It can take at least 5-10 minutes for the OVN-Kubernetes control plane to be redeployed and the command to return empty output. If you are using the OVN-Kubernetes network plugin, restart the Open Virtual Network (OVN) Kubernetes pods on all of the hosts. Note Validating and mutating admission webhooks can reject pods. If you add any additional webhooks with the failurePolicy set to Fail , then they can reject pods and the restoration process can fail. You can avoid this by saving and deleting webhooks while restoring the cluster state. After the cluster state is restored successfully, you can enable the webhooks again. Alternatively, you can temporarily set the failurePolicy to Ignore while restoring the cluster state. After the cluster state is restored successfully, you can set the failurePolicy to Fail . Remove the northbound database (nbdb) and southbound database (sbdb). 
Access the recovery host and the remaining control plane nodes by using Secure Shell (SSH) and run the following command: USD sudo rm -f /var/lib/ovn/etc/*.db Delete all OVN-Kubernetes control plane pods by running the following command: USD oc delete pods -l app=ovnkube-master -n openshift-ovn-kubernetes Ensure that any OVN-Kubernetes control plane pods are deployed again and are in a Running state by running the following command: USD oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes Example output NAME READY STATUS RESTARTS AGE ovnkube-master-nb24h 4/4 Running 0 48s Delete all ovnkube-node pods by running the following command: USD oc get pods -n openshift-ovn-kubernetes -o name | grep ovnkube-node | while read p ; do oc delete USDp -n openshift-ovn-kubernetes ; done Check the status of the OVN pods by running the following command: USD oc get po -n openshift-ovn-kubernetes If any OVN pods are in the Terminating status, delete the node that is running that OVN pod by running the following command. Replace <node> with the name of the node you are deleting: USD oc delete node <node> Use SSH to log in to the OVN pod node with the Terminating status by running the following command: USD ssh -i <ssh-key-path> core@<node> Move all PEM files from the /var/lib/kubelet/pki directory by running the following command: USD sudo mv /var/lib/kubelet/pki/* /tmp Restart the kubelet service by running the following command: USD sudo systemctl restart kubelet.service Return to the recovery etcd machines by running the following command: USD oc get csr Example output NAME AGE SIGNERNAME REQUESTOR CONDITION csr-<uuid> 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending Approve all new CSRs by running the following command, replacing csr-<uuid> with the name of the CSR: oc adm certificate approve csr-<uuid> Verify that the node is back by running the following command: USD oc get nodes Ensure that all the ovnkube-node pods are deployed again and are in a Running state by running the following command: USD oc get pods -n openshift-ovn-kubernetes | grep ovnkube-node Delete and re-create other non-recovery, control plane machines, one by one. After the machines are re-created, a new revision is forced and etcd automatically scales up. If you use a user-provisioned bare metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal". Warning Do not delete and re-create the machine for the recovery host. If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps: Warning Do not delete and re-create the machine for the recovery host. For bare metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node". Obtain the machine for one of the lost control plane hosts. 
In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 This is the control plane machine for the lost control plane host, ip-10-0-131-183.ec2.internal . Delete the machine of the lost control plane host by running: USD oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1 1 Specify the name of the control plane machine for the lost control plane host. A new machine is automatically provisioned after deleting the machine of the lost control plane host. Verify that a new machine has been created by running: USD oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 The new machine, clustername-8qw5l-master-3 is being created and is ready after the phase changes from Provisioning to Running . It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Repeat these steps for each lost control plane host that is not the recovery host. Turn off the quorum guard by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. 
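Because the quorum guard is turned off here and turned back on later in the procedure, small helper functions can reduce copy-and-paste mistakes. A sketch that is equivalent to the oc patch commands shown above and below:

# Helpers for toggling the etcd quorum guard during recovery.
quorum_guard_off() {
  oc patch etcd/cluster --type=merge \
    -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'
}
quorum_guard_on() {
  oc patch etcd/cluster --type=merge \
    -p '{"spec": {"unsupportedConfigOverrides": null}}'
}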
In a separate terminal window within the recovery host, export the recovery kubeconfig file by running the following command: USD export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig Force etcd redeployment. In the same terminal window where you exported the recovery kubeconfig file, run the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up. Turn the quorum guard back on by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command: USD oc get etcd/cluster -oyaml Verify all nodes are updated to the latest revision. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. After etcd is redeployed, force new rollouts for the control plane. The Kubernetes API server will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer. In a terminal that has access to the cluster as a cluster-admin user, run the following commands. Force a new rollout for the Kubernetes API server: USD oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. Force a new rollout for the Kubernetes controller manager: USD oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision. USD oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. 
The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. Force a new rollout for the Kubernetes scheduler: USD oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision. USD oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. Verify that all control plane hosts have started and joined the cluster. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h To ensure that all workloads return to normal operation following a recovery procedure, restart each pod that stores Kubernetes API information. This includes OpenShift Container Platform components such as routers, Operators, and third-party components. Note On completion of the procedural steps, you might need to wait a few minutes for all services to return to their restored state. For example, authentication by using oc login might not immediately work until the OAuth server pods are restarted. Consider using the system:admin kubeconfig file for immediate authentication. This method bases its authentication on SSL/TLS client certificates rather than OAuth tokens. You can authenticate with this file by issuing the following command: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig Issue the following command to display your authenticated user name: USD oc whoami 5.3.2.3. Additional resources Installing a user-provisioned cluster on bare metal Creating a bastion host to access OpenShift Container Platform instances and the control plane nodes with SSH Replacing a bare-metal control plane node 5.3.2.4. Issues and workarounds for restoring a persistent storage state If your OpenShift Container Platform cluster uses persistent storage of any form, some state of the cluster is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OpenShift Container Platform is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated. Important The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OpenShift Container Platform cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice versa.
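Before walking through the scenarios below, it can help to take a quick inventory of workloads and volumes whose status might be stale after the restore. These spot checks are only an illustration and are not part of the documented procedure:
USD oc get pods -A --field-selector=status.phase!=Running
# Lists pods that are not Running, for example pods stuck waiting on a volume that no longer exists
USD oc get pv
USD oc get pvc -A
# Review persistent volumes and claims against what actually exists on the storage provider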
The following are some example scenarios that produce an out-of-date status: MySQL database is running in a pod backed by a PV object. Restoring OpenShift Container Platform from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume. Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OpenShift Container Platform is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach to node X, and then pod P1 can start. Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to stop working. You might have to manually update the credentials required by those drivers or Operators. A device is removed or renamed from OpenShift Container Platform nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist. To fix this problem, an administrator must: Manually remove the PVs with invalid devices. Remove symlinks from the respective nodes. Delete LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources). 5.3.3. Recovering from expired control plane certificates 5.3.3.1. Recovering from expired control plane certificates The cluster can automatically recover from expired control plane certificates. However, you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. For user-provisioned installations, you might also need to approve pending kubelet serving CSRs. Use the following steps to approve the pending CSRs: Procedure Get the list of current CSRs: USD oc get csr Example output 1 A pending kubelet service CSR (for user-provisioned installations). 2 A pending node-bootstrapper CSR. Review the details of a CSR to verify that it is valid: USD oc describe csr <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. Approve each valid node-bootstrapper CSR: USD oc adm certificate approve <csr_name> For user-provisioned installations, approve each valid kubelet serving CSR: USD oc adm certificate approve <csr_name>
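When many CSRs are pending after a recovery, approving them one at a time can be tedious. The following one-liner is a convenience sketch, not part of this procedure; it approves every CSR that is currently pending, so review the output of oc get csr before running it:
USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
# Only CSRs without a status (that is, pending CSRs) are passed to oc adm certificate approve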
[ "oc debug --as-root node/<node_name>", "sh-4.4# chroot /host", "export HTTP_PROXY=http://<your_proxy.example.com>:8080", "export HTTPS_PROXY=https://<your_proxy.example.com>:8080", "export NO_PROXY=<example.com>", "sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup", "found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup", "oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"EtcdMembersAvailable\")]}{.message}{\"\\n\"}'", "2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy", "oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{\"\\t\"}{@.status.providerStatus.instanceState}{\"\\n\"}' | grep -v running", "ip-10-0-131-183.ec2.internal stopped 1", "oc get nodes -o jsonpath='{range .items[*]}{\"\\n\"}{.metadata.name}{\"\\t\"}{range .spec.taints[*]}{.key}{\" \"}' | grep unreachable", "ip-10-0-131-183.ec2.internal node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable 1", "oc get nodes -l node-role.kubernetes.io/master | grep \"NotReady\"", "ip-10-0-131-183.ec2.internal NotReady master 122m v1.26.0 1", "oc get nodes -l node-role.kubernetes.io/master", "NAME STATUS ROLES AGE VERSION ip-10-0-131-183.ec2.internal Ready master 6h13m v1.26.0 ip-10-0-164-97.ec2.internal Ready master 6h13m v1.26.0 ip-10-0-154-204.ec2.internal Ready master 6h13m v1.26.0", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m 1 etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m", "oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal", "sh-4.2# etcdctl member list 
-w table", "+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+", "sh-4.2# etcdctl member remove 6fc1e7c9db35841d", "Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346", "sh-4.2# etcdctl member list -w table", "+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "oc delete node <node_name>", "oc delete node ip-10-0-131-183.ec2.internal", "oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1", "etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m", "oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal", "oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal", "oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE 
clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-133-53.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'", "oc get etcd/cluster -oyaml", "EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-133-53.ec2.internal 3/3 Running 0 7m49s etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal", "sh-4.2# etcdctl member list -w table", "+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 5eb0d6b8ca24730c | started | ip-10-0-133-53.ec2.internal | https://10.0.133.53:2380 | https://10.0.133.53:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+", "oc debug node/ip-10-0-131-183.ec2.internal 1", "sh-4.2# chroot /host", "sh-4.2# mkdir /var/lib/etcd-backup", "sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/", "sh-4.2# mv /var/lib/etcd/ /tmp", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m", "oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal", "sh-4.2# etcdctl member list -w table", "+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | 
+------------------+---------+------------------------------+---------------------------+---------------------------+ | 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+", "sh-4.2# etcdctl member remove 62bcf33650a7170a", "Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346", "sh-4.2# etcdctl member list -w table", "+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1", "etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m", "oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal", "oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal", "oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"single-master-recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'", "oc get etcd/cluster -oyaml", "EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]", "oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal", "sh-4.2# etcdctl endpoint health", "https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms", "oc -n openshift-etcd get pods -l k8s-app=etcd -o wide", "etcd-openshift-control-plane-0 5/5 Running 11 3h56m 192.168.10.9 openshift-control-plane-0 <none> <none> etcd-openshift-control-plane-1 5/5 Running 0 3h54m 192.168.10.10 openshift-control-plane-1 <none> <none> etcd-openshift-control-plane-2 5/5 Running 0 3h58m 192.168.10.11 
openshift-control-plane-2 <none> <none>", "oc rsh -n openshift-etcd etcd-openshift-control-plane-0", "sh-4.2# etcdctl member list -w table", "+------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380/ | https://192.168.10.9:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+", "sh-4.2# etcdctl member remove 7a8197040a5126c8", "Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1b", "sh-4.2# etcdctl member list -w table", "+------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | cc3830a72fc357f9 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "oc get secrets -n openshift-etcd | grep openshift-control-plane-2", "etcd-peer-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-metrics-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-openshift-control-plane-2 kubernetes.io/tls 2 134m", "oc delete secret etcd-peer-openshift-control-plane-2 -n openshift-etcd secret \"etcd-peer-openshift-control-plane-2\" deleted", "oc delete secret etcd-serving-metrics-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-metrics-openshift-control-plane-2\" deleted", "oc delete secret etcd-serving-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-openshift-control-plane-2\" deleted", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned 
examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned", "oc get clusteroperator baremetal", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.13.0 True False False 3d15h", "oc delete bmh openshift-control-plane-2 -n openshift-machine-api", "baremetalhost.metal3.io \"openshift-control-plane-2\" deleted", "oc delete machine -n openshift-machine-api examplecluster-control-plane-2", "oc edit machine -n openshift-machine-api examplecluster-control-plane-2", "finalizers: - machine.machine.openshift.io", "machine.machine.openshift.io/examplecluster-control-plane-2 edited", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned", "oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 3h24m v1.26.0 openshift-control-plane-1 Ready master 3h24m v1.26.0 openshift-compute-0 Ready worker 176m v1.26.0 openshift-compute-1 Ready worker 176m v1.26.0", "cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: openshift-control-plane-2-bmc-secret namespace: openshift-machine-api data: password: <password> username: <username> type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-control-plane-2 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: redfish://10.46.61.18:443/redfish/v1/Systems/1 credentialsName: openshift-control-plane-2-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:b0:8a:a0 bootMode: UEFI externallyProvisioned: false online: true rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> userData: name: master-user-data-managed namespace: openshift-machine-api EOF", "oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 available examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 
3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned", "oc get bmh -n openshift-machine-api", "oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 provisioned examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m", "oc get nodes", "oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 4h26m v1.26.0 openshift-control-plane-1 Ready master 4h26m v1.26.0 openshift-control-plane-2 Ready master 12m v1.26.0 openshift-compute-0 Ready worker 3h58m v1.26.0 openshift-compute-1 Ready worker 3h58m v1.26.0", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'", "oc get etcd/cluster -oyaml", "EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-openshift-control-plane-0 5/5 Running 0 105m etcd-openshift-control-plane-1 5/5 Running 0 107m etcd-openshift-control-plane-2 5/5 Running 0 103m", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc rsh -n openshift-etcd etcd-openshift-control-plane-0", "sh-4.2# etcdctl member list -w table", "+------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380 | https://192.168.10.11:2379 | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380 | https://192.168.10.9:2379 | false | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+", "etcdctl endpoint health --cluster", "https://192.168.10.10:2379 is healthy: successfully committed proposal: took = 8.973065ms https://192.168.10.9:2379 is healthy: successfully committed proposal: took = 11.559829ms https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 11.665203ms", "oc get etcd 
-o=jsonpath='{range.items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision", "sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp", "sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"", "sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp", "sudo crictl ps | grep kube-apiserver | egrep -v \"operator|guard\"", "sudo mv -v /var/lib/etcd/ /tmp", "sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp", "sudo crictl ps --name keepalived", "ip -o address | egrep '<api_vip>|<ingress_vip>'", "sudo ip address del <reported_vip> dev <reported_vip_device>", "ip -o address | grep <api_vip>", "sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup", "...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml", "oc get nodes -w", "NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.26.0 host-172-25-75-38 Ready infra,worker 3d20h v1.26.0 host-172-25-75-40 Ready master 3d20h v1.26.0 host-172-25-75-65 Ready master 3d20h v1.26.0 host-172-25-75-74 Ready infra,worker 3d20h v1.26.0 host-172-25-75-79 Ready worker 3d20h v1.26.0 host-172-25-75-86 Ready worker 3d20h v1.26.0 host-172-25-75-98 Ready infra,worker 3d20h v1.26.0", "ssh -i <ssh-key-path> core@<master-hostname>", "sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem", "sudo systemctl restart kubelet.service", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc adm certificate approve <csr_name>", "sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"", "3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0", "oc -n openshift-etcd get pods -l k8s-app=etcd", "NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s", "oc delete node <non-recovery-controlplane-host-1> <non-recovery-controlplane-host-2>", "oc -n openshift-ovn-kubernetes get 
ds/ovnkube-master -o yaml | grep -E '<non-recovery_controller_ip_1>|<non-recovery_controller_ip_2>'", "sudo rm -f /var/lib/ovn/etc/*.db", "oc delete pods -l app=ovnkube-master -n openshift-ovn-kubernetes", "oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes", "NAME READY STATUS RESTARTS AGE ovnkube-master-nb24h 4/4 Running 0 48s", "oc get pods -n openshift-ovn-kubernetes -o name | grep ovnkube-node | while read p ; do oc delete USDp -n openshift-ovn-kubernetes ; done", "oc get po -n openshift-ovn-kubernetes", "oc delete node <node>", "ssh -i <ssh-key-path> core@<node>", "sudo mv /var/lib/kubelet/pki/* /tmp", "sudo systemctl restart kubelet.service", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-<uuid> 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending", "adm certificate approve csr-<uuid>", "oc get nodes", "oc get pods -n openshift-ovn-kubernetes | grep ovnkube-node", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc patch etcd/cluster --type=merge -p '{\"spec\": 
{\"unsupportedConfigOverrides\": null}}'", "oc get etcd/cluster -oyaml", "oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h", "export KUBECONFIG=<installation_directory>/auth/kubeconfig", "oc whoami", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 2 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc adm certificate approve <csr_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/backup_and_restore/control-plane-backup-and-restore
7.224. seabios
7.224. seabios 7.224.1. RHBA-2013:0307 - seabios bug fix and enhancement update Updated seabios packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The seabios packages contain an open-source legacy BIOS implementation which can be used as a coreboot payload. It implements the standard BIOS calling interfaces that a typical x86 proprietary BIOS implements. Bug Fixes BZ# 771616 In the QXL-VGA driver, the ram_size and vram_size variables were set to a default value that was too high. Consequently, the guest was not able to boot, and the "VM status: paused (internal-error)" message was returned. This update uses extended addressing for PCI address space and the guest can now boot successfully. BZ# 839674 Previously, the advertisement of S3 and S4 states in the default BIOS was disabled, and a separate BIOS binary file had been created for this purpose. This update enables users to configure S3 and S4 states per virtual machine in seabios, and thus the extra BIOS binary file is no longer necessary. Now, a single binary is used to enable these states. BZ# 851245 Prior to this update, the SeaBIOS component did not support non-contiguous APIC IDs. This resulted in incorrect topology generation on SMP and NUMA systems; moreover, QEMU-KVM was unable to run on some of the host systems. A patch has been provided to fix this bug, and SeaBIOS now supports non-contiguous APIC IDs. BZ# 854448 The seabios packages used the time-stamp counter (TSC) for timekeeping with a simple calibration loop. As a consequence, on a busy host, the calibration could be off by a large magnitude and could lead to boot failures. This update uses the power management timer (PMT) instead, which has a fixed frequency and does not suffer from calibration errors due to a loaded host machine. As a result, timeouts work correctly under all circumstances. Enhancements BZ#827500 With this update, it is possible to configure S3 and S4 states per virtual machine. BZ#831273 The seabios packages are now able to reboot a VM even if no bootable device can be found. Users of seabios are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/seabios
function::is_myproc
function::is_myproc Name function::is_myproc - Determines if the current probe point has occurred in the user's own process Synopsis Arguments None Description This function returns 1 if the current probe point has occurred in the user's own process.
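A minimal usage sketch, not taken from the tapset reference itself: the one-liner below assumes that the syscall.open probe point and its filename context variable are available in your installed tapsets, and prints open() calls made only by processes that the invoking user owns:
USD stap -e 'probe syscall.open { if (is_myproc()) printf("%s(%d) opened %s\n", execname(), pid(), filename) }'
# Without the is_myproc() guard, the probe would report open() calls from every process on the system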
[ "is_myproc:long()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-is-myproc
Chapter 1. Overview
Chapter 1. Overview Red Hat build of Apache Qpid Proton DotNet is a lightweight AMQP 1.0 library for the .NET platform. It enables you to write .NET applications that send and receive AMQP messages. Red Hat build of Apache Qpid Proton DotNet is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For information on available clients, see AMQ Clients . 1.1. Key features SSL/TLS for secure communication Flexible SASL authentication Seamless conversion between AMQP and native data types Access to all the features and capabilities of AMQP 1.0 An integrated development environment with full IntelliSense API documentation 1.2. Supported standards and protocols Red Hat build of Apache Qpid Proton DotNet supports the following industry-recognized standards and network protocols: Version 1.0 of the Advanced Message Queueing Protocol (AMQP) Versions 1.0, 1.1, 1.2, and 1.3 of the Transport Layer Security (TLS) protocol, the successor to SSL Simple Authentication and Security Layer (SASL) mechanisms ANONYMOUS, PLAIN, and EXTERNAL Modern TCP with IPv6 1.3. Supported configurations Refer to Red Hat AMQ Supported Configurations on the Red Hat Customer Portal for current information regarding Red Hat build of Apache Qpid Proton DotNet supported configurations. 1.4. Terms and concepts This section introduces the core API entities and describes how they operate together. Table 1.1. API terms Entity Description Client A context for creating Connections to Brokers Connection A channel for communication between two peers on a network Session A context for sending and receiving messages Sender A channel for sending messages to a target Receiver A channel for receiving messages from a source Delivery A Delivery received from a IReceiver that can be acted upon and contains a message Message A mutable holder of application data Red Hat build of Apache Qpid Proton DotNet sends and receives Messages . Messages are transferred to Brokers via Senders and from Brokers using Receivers . Received Messages are wrapped in a Delivery which can be Accepted, Rejected or Released. Senders and Receivers are created using Connections which are in turn created using Clients . Sessions are established over Connections . 1.5. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: USD cd <project-dir>
[ "cd <project-dir>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_proton_dotnet/1.0/html/using_qpid_proton_dotnet/overview
Chapter 2. Event sources
Chapter 2. Event sources 2.1. Event sources A Knative event source can be any Kubernetes object that generates or imports cloud events, and relays those events to another endpoint, known as a sink . Sourcing events is critical to developing a distributed system that reacts to events. You can create and manage Knative event sources by using the Developer perspective in the OpenShift Container Platform web console, the Knative ( kn ) CLI, or by applying YAML files. Currently, OpenShift Serverless supports the following event source types: API server source Brings Kubernetes API server events into Knative. The API server source sends a new event each time a Kubernetes resource is created, updated or deleted. Ping source Produces events with a fixed payload on a specified cron schedule. Kafka event source Connects an Apache Kafka cluster to a sink as an event source. You can also create a custom event source . 2.2. Event source in the Administrator perspective Sourcing events is critical to developing a distributed system that reacts to events. 2.2.1. Creating an event source by using the Administrator perspective A Knative event source can be any Kubernetes object that generates or imports cloud events, and relays those events to another endpoint, known as a sink . Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Administrator perspective. You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless Eventing . In the Create list, select Event Source . You will be directed to the Event Sources page. Select the event source type that you want to create. 2.3. Creating an API server source The API server source is an event source that can be used to connect an event sink, such as a Knative service, to the Kubernetes API server. The API server source watches for Kubernetes events and forwards them to the Knative Eventing broker. 2.3.1. Creating an API server source by using the web console After Knative Eventing is installed on your cluster, you can create an API server source by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create an event source. Prerequisites You have logged in to the OpenShift Container Platform web console. The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). Procedure If you want to re-use an existing service account, you can modify your existing ServiceAccount resource to include the required permissions instead of creating a new resource. 
Create a service account, role, and role binding for the event source as a YAML file: apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - "" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4 1 2 3 4 Change this namespace to the namespace that you have selected for installing the event source. Apply the YAML file: USD oc apply -f <filename> In the Developer perspective, navigate to +Add Event Source . The Event Sources page is displayed. Optional: If you have multiple providers for your event sources, select the required provider from the Providers list to filter the available event sources from the provider. Select ApiServerSource and then click Create Event Source . The Create Event Source page is displayed. Configure the ApiServerSource settings by using the Form view or YAML view : Note You can switch between the Form view and YAML view . The data is persisted when switching between the views. Enter v1 as the APIVERSION and Event as the KIND . Select the Service Account Name for the service account that you created. In the Target section, select your event sink. This can be either a Resource or a URI : Select Resource to use a channel, broker, or service as an event sink for the event source. Select URI to specify a Uniform Resource Identifier (URI) where the events are routed to. Click Create . Verification After you have created the API server source, check that it is connected to the event sink by viewing it in the Topology view. Note If a URI sink is used, you can modify the URI by right-clicking on URI sink Edit URI . Deleting the API server source Navigate to the Topology view. Right-click the API server source and select Delete ApiServerSource . 2.3.2. Creating an API server source by using the Knative CLI You can use the kn source apiserver create command to create an API server source by using the kn CLI. Using the kn CLI to create an API server source provides a more streamlined and intuitive user interface than modifying YAML files directly. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). You have installed the Knative ( kn ) CLI. Procedure If you want to re-use an existing service account, you can modify your existing ServiceAccount resource to include the required permissions instead of creating a new resource. 
Create a service account, role, and role binding for the event source as a YAML file: apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - "" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4 1 2 3 4 Change this namespace to the namespace that you have selected for installing the event source. Apply the YAML file: USD oc apply -f <filename> Create an API server source that has an event sink. In the following example, the sink is a broker: USD kn source apiserver create <event_source_name> --sink broker:<broker_name> --resource "event:v1" --service-account <service_account_name> --mode Resource To check that the API server source is set up correctly, create a Knative service that dumps incoming messages to its log: USD kn service create event-display --image quay.io/openshift-knative/showcase If you used a broker as an event sink, create a trigger to filter events from the default broker to the service: USD kn trigger create <trigger_name> --sink ksvc:event-display Create events by launching a pod in the default namespace: USD oc create deployment event-origin --image quay.io/openshift-knative/showcase Check that the controller is mapped correctly by inspecting the output generated by the following command: USD kn source apiserver describe <source_name> Example output Name: mysource Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 3m ServiceAccountName: events-sa Mode: Resource Sink: Name: default Namespace: default Kind: Broker (eventing.knative.dev/v1) Resources: Kind: event (v1) Controller: false Conditions: OK TYPE AGE REASON ++ Ready 3m ++ Deployed 3m ++ SinkProvided 3m ++ SufficientPermissions 3m ++ EventTypesProvided 3m Verification To verify that the Kubernetes events were sent to Knative, look at the event-display logs or use web browser to see the events. To view the events in a web browser, open the link returned by the following command: USD kn service describe event-display -o url Figure 2.1. Example browser page Alternatively, to see the logs in the terminal, view the event-display logs for the pods by entering the following command: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json ... Data, { "apiVersion": "v1", "involvedObject": { "apiVersion": "v1", "fieldPath": "spec.containers{event-origin}", "kind": "Pod", "name": "event-origin", "namespace": "default", ..... }, "kind": "Event", "message": "Started container", "metadata": { "name": "event-origin.159d7608e3a3572c", "namespace": "default", .... }, "reason": "Started", ... } Deleting the API server source Delete the trigger: USD kn trigger delete <trigger_name> Delete the event source: USD kn source apiserver delete <source_name> Delete the service account, cluster role, and cluster binding: USD oc delete -f authentication.yaml 2.3.2.1. 
Knative CLI sink flag When you create an event source by using the Knative ( kn ) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources. The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local , as the sink: Example command using the sink flag USD kn source binding create bind-heartbeat \ --namespace sinkbinding-example \ --subject "Job:batch/v1:app=heartbeat-cron" \ --sink http://event-display.svc.cluster.local \ 1 --ce-override "sink=bound" 1 svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel , and broker . 2.3.3. Creating an API server source by using YAML files Creating Knative resources by using YAML files uses a declarative API, which enables you to describe event sources declaratively and in a reproducible manner. To create an API server source by using YAML, you must create a YAML file that defines an ApiServerSource object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created the default broker in the same namespace as the one defined in the API server source YAML file. Install the OpenShift CLI ( oc ). Procedure If you want to re-use an existing service account, you can modify your existing ServiceAccount resource to include the required permissions instead of creating a new resource. Create a service account, role, and role binding for the event source as a YAML file: apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - "" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4 1 2 3 4 Change this namespace to the namespace that you have selected for installing the event source. 
Apply the YAML file: USD oc apply -f <filename> Create an API server source as a YAML file: apiVersion: sources.knative.dev/v1alpha1 kind: ApiServerSource metadata: name: testevents spec: serviceAccountName: events-sa mode: Resource resources: - apiVersion: v1 kind: Event sink: ref: apiVersion: eventing.knative.dev/v1 kind: Broker name: default Apply the ApiServerSource YAML file: USD oc apply -f <filename> To check that the API server source is set up correctly, create a Knative service as a YAML file that dumps incoming messages to its log: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display namespace: default spec: template: spec: containers: - image: quay.io/openshift-knative/showcase Apply the Service YAML file: USD oc apply -f <filename> Create a Trigger object as a YAML file that filters events from the default broker to the service created in the previous step: apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: event-display-trigger namespace: default spec: broker: default subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display Apply the Trigger YAML file: USD oc apply -f <filename> Create events by launching a pod in the default namespace: USD oc create deployment event-origin --image=quay.io/openshift-knative/showcase Check that the controller is mapped correctly by entering the following command and inspecting the output: USD oc get apiserversource.sources.knative.dev testevents -o yaml Example output apiVersion: sources.knative.dev/v1alpha1 kind: ApiServerSource metadata: annotations: creationTimestamp: "2020-04-07T17:24:54Z" generation: 1 name: testevents namespace: default resourceVersion: "62868" selfLink: /apis/sources.knative.dev/v1alpha1/namespaces/default/apiserversources/testevents uid: 1603d863-bb06-4d1c-b371-f580b4db99fa spec: mode: Resource resources: - apiVersion: v1 controller: false controllerSelector: apiVersion: "" kind: "" name: "" uid: "" kind: Event labelSelector: {} serviceAccountName: events-sa sink: ref: apiVersion: eventing.knative.dev/v1 kind: Broker name: default Verification To verify that the Kubernetes events were sent to Knative, you can look at the event-display logs or use a web browser to see the events. To view the events in a web browser, open the link returned by the following command: USD oc get ksvc event-display -o jsonpath='{.status.url}' Figure 2.2. Example browser page To see the logs in the terminal, view the event-display logs for the pods by entering the following command: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json ... Data, { "apiVersion": "v1", "involvedObject": { "apiVersion": "v1", "fieldPath": "spec.containers{event-origin}", "kind": "Pod", "name": "event-origin", "namespace": "default", ..... }, "kind": "Event", "message": "Started container", "metadata": { "name": "event-origin.159d7608e3a3572c", "namespace": "default", .... }, "reason": "Started", ... } Deleting the API server source Delete the trigger: USD oc delete -f trigger.yaml Delete the event source: USD oc delete -f k8s-events.yaml Delete the service account, role, and role binding: USD oc delete -f authentication.yaml 2.4. Creating a ping source A ping source is an event source that can be used to periodically send ping events with a constant payload to an event consumer.
A ping source can be used to schedule sending events, similar to a timer. 2.4.1. Creating a ping source by using the web console After Knative Eventing is installed on your cluster, you can create a ping source by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create an event source. Prerequisites You have logged in to the OpenShift Container Platform web console. The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure To verify that the ping source is working, create a simple Knative service that dumps incoming messages to the logs of the service. In the Developer perspective, navigate to +Add YAML . Copy the example YAML: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/showcase Click Create . Create a ping source in the same namespace as the service created in the previous step, or any other sink that you want to send events to. In the Developer perspective, navigate to +Add Event Source . The Event Sources page is displayed. Optional: If you have multiple providers for your event sources, select the required provider from the Providers list to filter the available event sources from the provider. Select Ping Source and then click Create Event Source . The Create Event Source page is displayed. Note You can configure the PingSource settings by using the Form view or YAML view and can switch between the views. The data is persisted when switching between the views. Enter a value for Schedule . In this example, the value is */2 * * * * , which creates a PingSource that sends a message every two minutes. Optional: You can enter a value for Data , which is the message payload. In the Target section, select your event sink. This can be either a Resource or a URI : Select Resource to use a channel, broker, or service as an event sink for the event source. In this example, the event-display service created in the previous step is used as the target Resource . Select URI to specify a Uniform Resource Identifier (URI) where the events are routed to. Click Create . Verification You can verify that the ping source was created and is connected to the sink by viewing the Topology page. In the Developer perspective, navigate to Topology . View the ping source and sink. View the event-display service in the web browser. You should see the ping source events in the web UI. Deleting the ping source Navigate to the Topology view. Right-click the ping source and select Delete Ping Source . 2.4.2. Creating a ping source by using the Knative CLI You can use the kn source ping create command to create a ping source by using the Knative ( kn ) CLI. Using the Knative CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly. Prerequisites The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Optional: If you want to use the verification steps for this procedure, install the OpenShift CLI ( oc ).
Procedure To verify that the ping source is working, create a simple Knative service that dumps incoming messages to the service logs: USD kn service create event-display \ --image quay.io/openshift-knative/showcase For each set of ping events that you want to request, create a ping source in the same namespace as the event consumer: USD kn source ping create test-ping-source \ --schedule "*/2 * * * *" \ --data '{"message": "Hello world!"}' \ --sink ksvc:event-display Check that the controller is mapped correctly by entering the following command and inspecting the output: USD kn source ping describe test-ping-source Example output Name: test-ping-source Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 15s Schedule: */2 * * * * Data: {"message": "Hello world!"} Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 8s ++ Deployed 8s ++ SinkProvided 15s ++ ValidSchedule 15s ++ EventTypeProvided 15s ++ ResourcesCorrect 15s Verification You can verify that the Kubernetes events were sent to the Knative event sink by looking at the logs of the sink pod. By default, Knative services terminate their pods if no traffic is received within a 60 second period. The example shown in this guide creates a ping source that sends a message every 2 minutes, so each message should be observed in a newly created pod. Watch for new pods created: USD watch oc get pods Cancel watching the pods using Ctrl+C, then look at the logs of the created pod: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 99e4f4f6-08ff-4bff-acf1-47f61ded68c9 time: 2020-04-07T16:16:00.000601161Z datacontenttype: application/json Data, { "message": "Hello world!" } Deleting the ping source Delete the ping source: USD kn delete pingsources.sources.knative.dev <ping_source_name> 2.4.2.1. Knative CLI sink flag When you create an event source by using the Knative ( kn ) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources. The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local , as the sink: Example command using the sink flag USD kn source binding create bind-heartbeat \ --namespace sinkbinding-example \ --subject "Job:batch/v1:app=heartbeat-cron" \ --sink http://event-display.svc.cluster.local \ 1 --ce-override "sink=bound" 1 svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel , and broker . 2.4.3. Creating a ping source by using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe event sources declaratively and in a reproducible manner. To create a serverless ping source by using YAML, you must create a YAML file that defines a PingSource object, then apply it by using oc apply . 
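The schedule field accepts a standard five-field cron expression. The following values are illustrative only and are not taken from the procedures in this guide; they show how a few common intervals are expressed:

schedule: "*/2 * * * *"    # every two minutes
schedule: "0 * * * *"      # at the start of every hour
schedule: "0 9 * * 1-5"    # at 09:00 from Monday to Friday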
Example PingSource object apiVersion: sources.knative.dev/v1 kind: PingSource metadata: name: test-ping-source spec: schedule: "*/2 * * * *" 1 data: '{"message": "Hello world!"}' 2 sink: 3 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display 1 The schedule of the event specified using CRON expression . 2 The event message body expressed as a JSON encoded data string. 3 These are the details of the event consumer. In this example, we are using a Knative service named event-display . Prerequisites The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure To verify that the ping source is working, create a simple Knative service that dumps incoming messages to the service's logs. Create a service YAML file: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/showcase Create the service: USD oc apply -f <filename> For each set of ping events that you want to request, create a ping source in the same namespace as the event consumer. Create a YAML file for the ping source: apiVersion: sources.knative.dev/v1 kind: PingSource metadata: name: test-ping-source spec: schedule: "*/2 * * * *" data: '{"message": "Hello world!"}' sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display Create the ping source: USD oc apply -f <filename> Check that the controller is mapped correctly by entering the following command: USD oc get pingsource.sources.knative.dev <ping_source_name> -oyaml Example output apiVersion: sources.knative.dev/v1 kind: PingSource metadata: annotations: sources.knative.dev/creator: developer sources.knative.dev/lastModifier: developer creationTimestamp: "2020-04-07T16:11:14Z" generation: 1 name: test-ping-source namespace: default resourceVersion: "55257" selfLink: /apis/sources.knative.dev/v1/namespaces/default/pingsources/test-ping-source uid: 3d80d50b-f8c7-4c1b-99f7-3ec00e0a8164 spec: data: '{ value: "hello" }' schedule: '*/2 * * * *' sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display namespace: default Verification You can verify that the Kubernetes events were sent to the Knative event sink by looking at the sink pod's logs. By default, Knative services terminate their pods if no traffic is received within a 60 second period. The example shown in this guide creates a PingSource that sends a message every 2 minutes, so each message should be observed in a newly created pod. Watch for new pods created: USD watch oc get pods Cancel watching the pods using Ctrl+C, then look at the logs of the created pod: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 042ff529-240e-45ee-b40c-3a908129853e time: 2020-04-07T16:22:00.000791674Z datacontenttype: application/json Data, { "message": "Hello world!" } Deleting the ping source Delete the ping source: USD oc delete -f <filename> Example command USD oc delete -f ping-source.yaml 2.5. 
Source for Apache Kafka You can create an Apache Kafka source that reads events from an Apache Kafka cluster and passes these events to a sink. You can create a Kafka source by using the OpenShift Container Platform web console, the Knative ( kn ) CLI, or by creating a KafkaSource object directly as a YAML file and using the OpenShift CLI ( oc ) to apply it. Note See the documentation for Installing Knative broker for Apache Kafka . 2.5.1. Creating an Apache Kafka event source by using the web console After the Knative broker implementation for Apache Kafka is installed on your cluster, you can create an Apache Kafka source by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a Kafka source. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your cluster. You have logged in to the web console. You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure In the Developer perspective, navigate to the +Add page and select Event Source . In the Event Sources page, select Kafka Source in the Type section. Configure the Kafka Source settings: Add a comma-separated list of Bootstrap Servers . Add a comma-separated list of Topics . Add a Consumer Group . Select the Service Account Name for the service account that you created. In the Target section, select your event sink. This can be either a Resource or a URI : Select Resource to use a channel, broker, or service as an event sink for the event source. Select URI to specify a Uniform Resource Identifier (URI) where the events are routed to. Enter a Name for the Kafka event source. Click Create . Verification You can verify that the Kafka event source was created and is connected to the sink by viewing the Topology page. In the Developer perspective, navigate to Topology . View the Kafka event source and sink. 2.5.2. Creating an Apache Kafka event source by using the Knative CLI You can use the kn source kafka create command to create a Kafka source by using the Knative ( kn ) CLI. Using the Knative CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly. Prerequisites The OpenShift Serverless Operator, Knative Eventing, Knative Serving, and the KnativeKafka custom resource (CR) are installed on your cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import. You have installed the Knative ( kn ) CLI. Optional: You have installed the OpenShift CLI ( oc ) if you want to use the verification steps in this procedure. 
Procedure To verify that the Kafka event source is working, create a Knative service that dumps incoming events into the service logs: USD kn service create event-display \ --image quay.io/openshift-knative/showcase Create a KafkaSource CR: USD kn source kafka create <kafka_source_name> \ --servers <cluster_kafka_bootstrap>.kafka.svc:9092 \ --topics <topic_name> --consumergroup my-consumer-group \ --sink event-display Note Replace the placeholder values in this command with values for your source name, bootstrap servers, and topics. The --servers , --topics , and --consumergroup options specify the connection parameters to the Kafka cluster. The --consumergroup option is optional. Optional: View details about the KafkaSource CR you created: USD kn source kafka describe <kafka_source_name> Example output Name: example-kafka-source Namespace: kafka Age: 1h BootstrapServers: example-cluster-kafka-bootstrap.kafka.svc:9092 Topics: example-topic ConsumerGroup: example-consumer-group Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 1h ++ Deployed 1h ++ SinkProvided 1h Verification steps Trigger the Kafka instance to send a message to the topic: USD oc -n kafka run kafka-producer \ -ti --image=quay.io/strimzi/kafka:latest-kafka-2.7.0 --rm=true \ --restart=Never -- bin/kafka-console-producer.sh \ --broker-list <cluster_kafka_bootstrap>:9092 --topic my-topic Enter the message in the prompt. This command assumes that: The Kafka cluster is installed in the kafka namespace. The KafkaSource object has been configured to use the my-topic topic. Verify that the message arrived by viewing the logs: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.kafka.event source: /apis/v1/namespaces/default/kafkasources/example-kafka-source#example-topic subject: partition:46#0 id: partition:46/offset:0 time: 2021-03-10T11:21:49.4Z Extensions, traceparent: 00-161ff3815727d8755848ec01c866d1cd-7ff3916c44334678-00 Data, Hello! 2.5.2.1. Knative CLI sink flag When you create an event source by using the Knative ( kn ) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources. The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local , as the sink: Example command using the sink flag USD kn source binding create bind-heartbeat \ --namespace sinkbinding-example \ --subject "Job:batch/v1:app=heartbeat-cron" \ --sink http://event-display.svc.cluster.local \ 1 --ce-override "sink=bound" 1 svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel , and broker . 2.5.3. Creating an Apache Kafka event source by using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe applications declaratively and in a reproducible manner. To create a Kafka source by using YAML, you must create a YAML file that defines a KafkaSource object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your cluster. 
You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import. Install the OpenShift CLI ( oc ). Procedure Create a KafkaSource object as a YAML file: apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: <source_name> spec: consumerGroup: <group_name> 1 bootstrapServers: - <list_of_bootstrap_servers> topics: - <list_of_topics> 2 sink: - <list_of_sinks> 3 1 A consumer group is a group of consumers that use the same group ID, and consume data from a topic. 2 A topic provides a destination for the storage of data. Each topic is split into one or more partitions. 3 A sink specifies where events are sent to from a source. Important Only the v1beta1 version of the API for KafkaSource objects on OpenShift Serverless is supported. Do not use the v1alpha1 version of this API, as this version is now deprecated. Example KafkaSource object apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: kafka-source spec: consumerGroup: knative-group bootstrapServers: - my-cluster-kafka-bootstrap.kafka:9092 topics: - knative-demo-topic sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display Apply the KafkaSource YAML file: USD oc apply -f <filename> Verification Verify that the Kafka event source was created by entering the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE kafkasource-kafka-source-5ca0248f-... 1/1 Running 0 13m 2.5.4. Configuring SASL authentication for Apache Kafka sources Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster; otherwise events cannot be produced or consumed. Prerequisites You have cluster or dedicated administrator permissions on OpenShift Container Platform. The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have a username and password for a Kafka cluster. You have chosen the SASL mechanism to use, for example, PLAIN , SCRAM-SHA-256 , or SCRAM-SHA-512 . If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster. You have installed the OpenShift ( oc ) CLI. Procedure Create the certificate files as secrets in your chosen namespace: USD oc create secret -n <namespace> generic <kafka_auth_secret> \ --from-file=ca.crt=caroot.pem \ --from-literal=password="SecretPassword" \ --from-literal=saslType="SCRAM-SHA-512" \ 1 --from-literal=user="my-sasl-user" 1 The SASL type can be PLAIN , SCRAM-SHA-256 , or SCRAM-SHA-512 . Create or modify your Kafka source so that it contains the following spec configuration: apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: example-source spec: ... net: sasl: enable: true user: secretKeyRef: name: <kafka_auth_secret> key: user password: secretKeyRef: name: <kafka_auth_secret> key: password type: secretKeyRef: name: <kafka_auth_secret> key: saslType tls: enable: true caCert: 1 secretKeyRef: name: <kafka_auth_secret> key: ca.crt ... 
1 The caCert spec is not required if you are using a public cloud Kafka service. 2.5.5. Configuring KEDA autoscaling for KafkaSource You can configure Knative Eventing sources for Apache Kafka (KafkaSource) to be autoscaled using the Custom Metrics Autoscaler Operator, which is based on the Kubernetes Event Driven Autoscaler (KEDA). Important Configuring KEDA autoscaling for KafkaSource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource are installed on your cluster. Procedure In the KnativeKafka custom resource, enable KEDA scaling: Example YAML apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: config: kafka-features: controller-autoscaler-keda: enabled Apply the KnativeKafka YAML file: USD oc apply -f <filename> 2.6. Custom event sources If you need to ingress events from an event producer that is not included in Knative, or from a producer that emits events which are not in the CloudEvent format, you can do this by creating a custom event source. You can create a custom event source by using one of the following methods: Use a PodSpecable object as an event source, by creating a sink binding. Use a container as an event source, by creating a container source. 2.6.1. Sink binding The SinkBinding object supports decoupling event production from delivery addressing. Sink binding is used to connect event producers to an event consumer, or sink . An event producer is a Kubernetes resource that embeds a PodSpec template and produces events. A sink is an addressable Kubernetes object that can receive events. The SinkBinding object injects environment variables into the PodTemplateSpec of the sink, which means that the application code does not need to interact directly with the Kubernetes API to locate the event destination. These environment variables are as follows: K_SINK The URL of the resolved sink. K_CE_OVERRIDES A JSON object that specifies overrides to the outbound event. Note The SinkBinding object currently does not support custom revision names for services. 2.6.1.1. Creating a sink binding by using YAML Creating Knative resources by using YAML files uses a declarative API, which enables you to describe event sources declaratively and in a reproducible manner. To create a sink binding by using YAML, you must create a YAML file that defines an SinkBinding object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure To check that sink binding is set up correctly, create a Knative event display service, or event sink, that dumps incoming messages to its log. 
Create a service YAML file: Example service YAML file apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/showcase Create the service: USD oc apply -f <filename> Create a sink binding instance that directs events to the service. Create a sink binding YAML file: Example service YAML file apiVersion: sources.knative.dev/v1alpha1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: batch/v1 kind: Job 1 selector: matchLabels: app: heartbeat-cron sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display 1 In this example, any Job with the label app: heartbeat-cron will be bound to the event sink. Create the sink binding: USD oc apply -f <filename> Create a CronJob object. Create a cron job YAML file: Example cron job YAML file apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: "* * * * *" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: "true" spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: "true" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace Important To use sink binding, you must manually add a bindings.knative.dev/include=true label to your Knative resources. For example, to add this label to a CronJob resource, add the following lines to the Job resource YAML definition: jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: "true" Create the cron job: USD oc apply -f <filename> Check that the controller is mapped correctly by entering the following command and inspecting the output: USD oc get sinkbindings.sources.knative.dev bind-heartbeat -oyaml Example output spec: sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display namespace: default subject: apiVersion: batch/v1 kind: Job namespace: default selector: matchLabels: app: heartbeat-cron Verification You can verify that the Kubernetes events were sent to the Knative event sink by looking at the message dumper function logs. Enter the command: USD oc get pods Enter the command: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { "id": 1, "label": "" } 2.6.1.2. Creating a sink binding by using the Knative CLI You can use the kn source binding create command to create a sink binding by using the Knative ( kn ) CLI. Using the Knative CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly. Prerequisites The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Install the Knative ( kn ) CLI. Install the OpenShift CLI ( oc ). 
Note The following procedure requires you to create YAML files. If you change the names of the YAML files from those used in the examples, you must ensure that you also update the corresponding CLI commands. Procedure To check that sink binding is set up correctly, create a Knative event display service, or event sink, that dumps incoming messages to its log: USD kn service create event-display --image quay.io/openshift-knative/showcase Create a sink binding instance that directs events to the service: USD kn source binding create bind-heartbeat --subject Job:batch/v1:app=heartbeat-cron --sink ksvc:event-display Create a CronJob object. Create a cron job YAML file: Example cron job YAML file apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: "* * * * *" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: "true" spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: "true" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace Important To use sink binding, you must manually add a bindings.knative.dev/include=true label to your Knative CRs. For example, to add this label to a CronJob CR, add the following lines to the Job CR YAML definition: jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: "true" Create the cron job: USD oc apply -f <filename> Check that the controller is mapped correctly by entering the following command and inspecting the output: USD kn source binding describe bind-heartbeat Example output Name: bind-heartbeat Namespace: demo-2 Annotations: sources.knative.dev/creator=minikube-user, sources.knative.dev/lastModifier=minikub ... Age: 2m Subject: Resource: job (batch/v1) Selector: app: heartbeat-cron Sink: Name: event-display Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 2m Verification You can verify that the Kubernetes events were sent to the Knative event sink by looking at the message dumper function logs. View the message dumper function logs by entering the following commands: USD oc get pods USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { "id": 1, "label": "" } 2.6.1.2.1. Knative CLI sink flag When you create an event source by using the Knative ( kn ) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources. The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local , as the sink: Example command using the sink flag USD kn source binding create bind-heartbeat \ --namespace sinkbinding-example \ --subject "Job:batch/v1:app=heartbeat-cron" \ --sink http://event-display.svc.cluster.local \ 1 --ce-override "sink=bound" 1 svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. 
Other default sink prefixes include channel , and broker . 2.6.1.3. Creating a sink binding by using the web console After Knative Eventing is installed on your cluster, you can create a sink binding by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create an event source. Prerequisites You have logged in to the OpenShift Container Platform web console. The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Knative service to use as a sink: In the Developer perspective, navigate to +Add YAML . Copy the example YAML: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/showcase Click Create . Create a CronJob resource that is used as an event source and sends an event every minute. In the Developer perspective, navigate to +Add YAML . Copy the example YAML: apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: "*/1 * * * *" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: true 1 spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats args: - --period=1 env: - name: ONE_SHOT value: "true" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace 1 Ensure that you include the bindings.knative.dev/include: true label. The default namespace selection behavior of OpenShift Serverless uses inclusion mode. Click Create . Create a sink binding in the same namespace as the service created in the step, or any other sink that you want to send events to. In the Developer perspective, navigate to +Add Event Source . The Event Sources page is displayed. Optional: If you have multiple providers for your event sources, select the required provider from the Providers list to filter the available event sources from the provider. Select Sink Binding and then click Create Event Source . The Create Event Source page is displayed. Note You can configure the Sink Binding settings by using the Form view or YAML view and can switch between the views. The data is persisted when switching between the views. In the apiVersion field enter batch/v1 . In the Kind field enter Job . Note The CronJob kind is not supported directly by OpenShift Serverless sink binding, so the Kind field must target the Job objects created by the cron job, rather than the cron job object itself. In the Target section, select your event sink. This can be either a Resource or a URI : Select Resource to use a channel, broker, or service as an event sink for the event source. In this example, the event-display service created in the step is used as the target Resource . Select URI to specify a Uniform Resource Identifier (URI) where the events are routed to. In the Match labels section: Enter app in the Name field. Enter heartbeat-cron in the Value field. Note The label selector is required when using cron jobs with sink binding, rather than the resource name. 
This is because jobs created by a cron job do not have a predictable name, and contain a randomly generated string in their name. For example, hearthbeat-cron-1cc23f . Click Create . Verification You can verify that the sink binding, sink, and cron job have been created and are working correctly by viewing the Topology page and pod logs. In the Developer perspective, navigate to Topology . View the sink binding, sink, and heartbeats cron job. Observe that successful jobs are being registered by the cron job once the sink binding is added. This means that the sink binding is successfully reconfiguring the jobs created by the cron job. Browse the event-display service to see events produced by the heartbeats cron job. 2.6.1.4. Sink binding reference You can use a PodSpecable object as an event source by creating a sink binding. You can configure multiple parameters when creating a SinkBinding object. SinkBinding objects support the following parameters: Field Description Required or optional apiVersion Specifies the API version, for example sources.knative.dev/v1 . Required kind Identifies this resource object as a SinkBinding object. Required metadata Specifies metadata that uniquely identifies the SinkBinding object. For example, a name . Required spec Specifies the configuration information for this SinkBinding object. Required spec.sink A reference to an object that resolves to a URI to use as the sink. Required spec.subject References the resources for which the runtime contract is augmented by binding implementations. Required spec.ceOverrides Defines overrides to control the output format and modifications to the event sent to the sink. Optional 2.6.1.4.1. Subject parameter The Subject parameter references the resources for which the runtime contract is augmented by binding implementations. You can configure multiple fields for a Subject definition. The Subject definition supports the following fields: Field Description Required or optional apiVersion API version of the referent. Required kind Kind of the referent. Required namespace Namespace of the referent. If omitted, this defaults to the namespace of the object. Optional name Name of the referent. Do not use if you configure selector . selector Selector of the referents. Do not use if you configure name . selector.matchExpressions A list of label selector requirements. Only use one of either matchExpressions or matchLabels . selector.matchExpressions.key The label key that the selector applies to. Required if using matchExpressions . selector.matchExpressions.operator Represents a key's relationship to a set of values. Valid operators are In , NotIn , Exists and DoesNotExist . Required if using matchExpressions . selector.matchExpressions.values An array of string values. If the operator parameter value is In or NotIn , the values array must be non-empty. If the operator parameter value is Exists or DoesNotExist , the values array must be empty. This array is replaced during a strategic merge patch. Required if using matchExpressions . selector.matchLabels A map of key-value pairs. Each key-value pair in the matchLabels map is equivalent to an element of matchExpressions , where the key field is matchLabels.<key> , the operator is In , and the values array contains only matchLabels.<value> . Only use one of either matchExpressions or matchLabels . 
Subject parameter examples Given the following YAML, the Deployment object named mysubject in the default namespace is selected: apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: apps/v1 kind: Deployment namespace: default name: mysubject ... Given the following YAML, any Job object with the label working=example in the default namespace is selected: apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: batch/v1 kind: Job namespace: default selector: matchLabels: working: example ... Given the following YAML, any Pod object with the label working=example or working=sample in the default namespace is selected: apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: v1 kind: Pod namespace: default selector: matchExpressions: - key: working operator: In values: - example - sample ... 2.6.1.4.2. CloudEvent overrides A ceOverrides definition provides overrides that control the CloudEvent's output format and modifications sent to the sink. You can configure multiple fields for the ceOverrides definition. A ceOverrides definition supports the following fields: Field Description Required or optional extensions Specifies which attributes are added or overridden on the outbound event. Each extensions key-value pair is set independently on the event as an attribute extension. Optional Note Only valid CloudEvent attribute names are allowed as extensions. You cannot set the spec defined attributes from the extensions override configuration. For example, you cannot modify the type attribute. CloudEvent Overrides example apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: ... ceOverrides: extensions: extra: this is an extra attribute additional: 42 This sets the K_CE_OVERRIDES environment variable on the subject : Example output { "extensions": { "extra": "this is an extra attribute", "additional": "42" } } 2.6.1.4.3. The include label To use a sink binding, you need to assign the bindings.knative.dev/include: "true" label to either the resource or the namespace that the resource is included in. If the resource definition does not include the label, a cluster administrator can attach it to the namespace by running: USD oc label namespace <namespace> bindings.knative.dev/include=true 2.6.1.5. Integrating Service Mesh with a sink binding Prerequisites You have integrated Service Mesh with OpenShift Serverless. Procedure Create a Service in a namespace that is a member of the ServiceMeshMemberRoll . apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display namespace: <namespace> 1 spec: template: metadata: annotations: sidecar.istio.io/inject: "true" 2 sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: containers: - image: quay.io/openshift-knative/showcase 1 A namespace that is a member of the ServiceMeshMemberRoll . 2 Injects Service Mesh sidecars into the Knative service pods. Apply the Service resource. USD oc apply -f <filename> Create a SinkBinding resource. apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat namespace: <namespace> 1 spec: subject: apiVersion: batch/v1 kind: Job 2 selector: matchLabels: app: heartbeat-cron sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display 1 A namespace that is a member of the ServiceMeshMemberRoll .
2 In this example, any Job with the label app: heartbeat-cron is bound to the event sink. Apply the SinkBinding resource. USD oc apply -f <filename> Create a CronJob : apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron namespace: <namespace> 1 spec: # Run every minute schedule: "* * * * *" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: "true" spec: template: metadata: annotations: sidecar.istio.io/inject: "true" 2 sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: "true" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace 1 A namespace that is a member of the ServiceMeshMemberRoll . 2 Injects Service Mesh sidecars into the CronJob pods. Apply the CronJob resource. USD oc apply -f <filename> Verification To verify that the events were sent to the Knative event sink, look at the message dumper function logs. Enter the following command: USD oc get pods Enter the following command: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing/test/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { "id": 1, "label": "" } Additional resources IntegratingService Mesh with OpenShift Serverless 2.6.2. Container source Container sources create a container image that generates events and sends events to a sink. You can use a container source to create a custom event source, by creating a container image and a ContainerSource object that uses your image URI. 2.6.2.1. Guidelines for creating a container image Two environment variables are injected by the container source controller: K_SINK and K_CE_OVERRIDES . These variables are resolved from the sink and ceOverrides spec, respectively. Events are sent to the sink URI specified in the K_SINK environment variable. The message must be sent as a POST using the CloudEvent HTTP format. Example container images The following is an example of a heartbeats container image: package main import ( "context" "encoding/json" "flag" "fmt" "log" "os" "strconv" "time" duckv1 "knative.dev/pkg/apis/duck/v1" cloudevents "github.com/cloudevents/sdk-go/v2" "github.com/kelseyhightower/envconfig" ) type Heartbeat struct { Sequence int `json:"id"` Label string `json:"label"` } var ( eventSource string eventType string sink string label string periodStr string ) func init() { flag.StringVar(&eventSource, "eventSource", "", "the event-source (CloudEvents)") flag.StringVar(&eventType, "eventType", "dev.knative.eventing.samples.heartbeat", "the event-type (CloudEvents)") flag.StringVar(&sink, "sink", "", "the host url to heartbeat to") flag.StringVar(&label, "label", "", "a special label") flag.StringVar(&periodStr, "period", "5", "the number of seconds between heartbeats") } type envConfig struct { // Sink URL where to send heartbeat cloud events Sink string `envconfig:"K_SINK"` // CEOverrides are the CloudEvents overrides to be applied to the outbound event. CEOverrides string `envconfig:"K_CE_OVERRIDES"` // Name of this pod. 
Name string `envconfig:"POD_NAME" required:"true"` // Namespace this pod exists in. Namespace string `envconfig:"POD_NAMESPACE" required:"true"` // Whether to run continuously or exit. OneShot bool `envconfig:"ONE_SHOT" default:"false"` } func main() { flag.Parse() var env envConfig if err := envconfig.Process("", &env); err != nil { log.Printf("[ERROR] Failed to process env var: %s", err) os.Exit(1) } if env.Sink != "" { sink = env.Sink } var ceOverrides *duckv1.CloudEventOverrides if len(env.CEOverrides) > 0 { overrides := duckv1.CloudEventOverrides{} err := json.Unmarshal([]byte(env.CEOverrides), &overrides) if err != nil { log.Printf("[ERROR] Unparseable CloudEvents overrides %s: %v", env.CEOverrides, err) os.Exit(1) } ceOverrides = &overrides } p, err := cloudevents.NewHTTP(cloudevents.WithTarget(sink)) if err != nil { log.Fatalf("failed to create http protocol: %s", err.Error()) } c, err := cloudevents.NewClient(p, cloudevents.WithUUIDs(), cloudevents.WithTimeNow()) if err != nil { log.Fatalf("failed to create client: %s", err.Error()) } var period time.Duration if p, err := strconv.Atoi(periodStr); err != nil { period = time.Duration(5) * time.Second } else { period = time.Duration(p) * time.Second } if eventSource == "" { eventSource = fmt.Sprintf("https://knative.dev/eventing-contrib/cmd/heartbeats/#%s/%s", env.Namespace, env.Name) log.Printf("Heartbeats Source: %s", eventSource) } if len(label) > 0 && label[0] == '"' { label, _ = strconv.Unquote(label) } hb := &Heartbeat{ Sequence: 0, Label: label, } ticker := time.NewTicker(period) for { hb.Sequence++ event := cloudevents.NewEvent("1.0") event.SetType(eventType) event.SetSource(eventSource) event.SetExtension("the", 42) event.SetExtension("heart", "yes") event.SetExtension("beats", true) if ceOverrides != nil && ceOverrides.Extensions != nil { for n, v := range ceOverrides.Extensions { event.SetExtension(n, v) } } if err := event.SetData(cloudevents.ApplicationJSON, hb); err != nil { log.Printf("failed to set cloudevents data: %s", err.Error()) } log.Printf("sending cloudevent to %s", sink) if res := c.Send(context.Background(), event); !cloudevents.IsACK(res) { log.Printf("failed to send cloudevent: %v", res) } if env.OneShot { return } // Wait for tick <-ticker.C } } The following is an example of a container source that references the heartbeats container image: apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: template: spec: containers: # This corresponds to a heartbeats image URI that you have built and published - image: gcr.io/knative-releases/knative.dev/eventing/cmd/heartbeats name: heartbeats args: - --period=1 env: - name: POD_NAME value: "example-pod" - name: POD_NAMESPACE value: "event-test" sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: showcase ... 2.6.2.2. Creating and managing container sources by using the Knative CLI You can use the kn source container commands to create and manage container sources by using the Knative ( kn ) CLI. Using the Knative CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly. 
Create a container source USD kn source container create <container_source_name> --image <image_uri> --sink <sink> Delete a container source USD kn source container delete <container_source_name> Describe a container source USD kn source container describe <container_source_name> List existing container sources USD kn source container list List existing container sources in YAML format USD kn source container list -o yaml Update a container source This command updates the image URI for an existing container source: USD kn source container update <container_source_name> --image <image_uri> 2.6.2.3. Creating a container source by using the web console After Knative Eventing is installed on your cluster, you can create a container source by using the web console. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create an event source. Prerequisites You have logged in to the OpenShift Container Platform web console. The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure In the Developer perspective, navigate to +Add Event Source . The Event Sources page is displayed. Select Container Source and then click Create Event Source . The Create Event Source page is displayed. Configure the Container Source settings by using the Form view or YAML view : Note You can switch between the Form view and YAML view . The data is persisted when switching between the views. In the Image field, enter the URI of the image that you want to run in the container created by the container source. In the Name field, enter the name of the image. Optional: In the Arguments field, enter any arguments to be passed to the container. Optional: In the Environment variables field, add any environment variables to set in the container. In the Target section, select your event sink. This can be either a Resource or a URI : Select Resource to use a channel, broker, or service as an event sink for the event source. Select URI to specify a Uniform Resource Identifier (URI) where the events are routed to. After you have finished configuring the container source, click Create . 2.6.2.4. Container source reference You can use a container as an event source, by creating a ContainerSource object. You can configure multiple parameters when creating a ContainerSource object. ContainerSource objects support the following fields: Field Description Required or optional apiVersion Specifies the API version, for example sources.knative.dev/v1 . Required kind Identifies this resource object as a ContainerSource object. Required metadata Specifies metadata that uniquely identifies the ContainerSource object. For example, a name . Required spec Specifies the configuration information for this ContainerSource object. Required spec.sink A reference to an object that resolves to a URI to use as the sink. Required spec.template A template spec for the ContainerSource object. Required spec.ceOverrides Defines overrides to control the output format and modifications to the event sent to the sink. 
Optional Template parameter example apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: template: spec: containers: - image: quay.io/openshift-knative/heartbeats:latest name: heartbeats args: - --period=1 env: - name: POD_NAME value: "mypod" - name: POD_NAMESPACE value: "event-test" ... 2.6.2.4.1. CloudEvent overrides A ceOverrides definition provides overrides that control the CloudEvent's output format and modifications sent to the sink. You can configure multiple fields for the ceOverrides definition. A ceOverrides definition supports the following fields: Field Description Required or optional extensions Specifies which attributes are added or overridden on the outbound event. Each extensions key-value pair is set independently on the event as an attribute extension. Optional Note Only valid CloudEvent attribute names are allowed as extensions. You cannot set the spec defined attributes from the extensions override configuration. For example, you can not modify the type attribute. CloudEvent Overrides example apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: ... ceOverrides: extensions: extra: this is an extra attribute additional: 42 This sets the K_CE_OVERRIDES environment variable on the subject : Example output { "extensions": { "extra": "this is an extra attribute", "additional": "42" } } 2.6.2.5. Integrating Service Mesh with ContainerSource Prerequisites You have integrated Service Mesh with OpenShift Serverless. Procedure Create a Service in a namespace that is a member of the ServiceMeshMemberRoll . apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display namespace: <namespace> 1 spec: template: metadata: annotations: sidecar.istio.io/inject: "true" 2 sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: containers: - image: quay.io/openshift-knative/showcase 1 A namespace that is a member of the ServiceMeshMemberRoll . 2 Injects Service Mesh sidecars into the Knative service pods. Apply the Service resource. USD oc apply -f <filename> Create a ContainerSource object in a namespace that is a member of the ServiceMeshMemberRoll and sink set to the event-display . apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats namespace: <namespace> 1 spec: template: metadata: 2 annotations: sidecar.istio.io/inject: "true" sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: containers: - image: quay.io/openshift-knative/heartbeats:latest name: heartbeats args: - --period=1s env: - name: POD_NAME value: "example-pod" - name: POD_NAMESPACE value: "event-test" sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display 1 A namespace is part of the ServiceMeshMemberRoll . 2 Enables Service Mesh integration with a ContainerSource object. Apply the ContainerSource resource. USD oc apply -f <filename> Verification To verify that the events were sent to the Knative event sink, look at the message dumper function logs. 
Enter the following command: USD oc get pods Enter the following command: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁️ cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing/test/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { "id": 1, "label": "" } Additional resources Integrating Service Mesh with OpenShift Serverless 2.7. Connecting an event source to an event sink by using the Developer perspective When you create an event source by using the OpenShift Container Platform web console, you can specify a target event sink that events are sent to from that source. The event sink can be any addressable or callable resource that can receive incoming events from other resources. 2.7.1. Connect an event source to an event sink by using the Developer perspective Prerequisites The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Developer perspective. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created an event sink, such as a Knative service, channel, or broker. Procedure Create an event source of any type by navigating to +Add Event Source and selecting the event source type that you want to create. In the Target section of the Create Event Source form view, select your event sink. This can be either a Resource or a URI : Select Resource to use a channel, broker, or service as an event sink for the event source. Select URI to specify a Uniform Resource Identifier (URI) where the events are routed to. Click Create . Verification You can verify that the event source was created and is connected to the sink by viewing the Topology page. In the Developer perspective, navigate to Topology . View the event source and click the connected event sink to see the sink details in the right panel.
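If you prefer working from the command line, the same source-to-sink wiring can be expressed with the kn CLI by passing a --sink flag when you create the source. The following is a minimal sketch, assuming the event-display Knative service from the earlier examples already exists and using a hypothetical source name:

kn source ping create ping-to-display \
  --schedule "*/1 * * * *" \
  --data '{"message": "ping"}' \
  --sink ksvc:event-display

kn source ping describe ping-to-display

A source created this way appears on the Topology page with the same connection to its sink as a source created through the form view.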
[ "apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - \"\" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4", "oc apply -f <filename>", "apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - \"\" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4", "oc apply -f <filename>", "kn source apiserver create <event_source_name> --sink broker:<broker_name> --resource \"event:v1\" --service-account <service_account_name> --mode Resource", "kn service create event-display --image quay.io/openshift-knative/showcase", "kn trigger create <trigger_name> --sink ksvc:event-display", "oc create deployment event-origin --image quay.io/openshift-knative/showcase", "kn source apiserver describe <source_name>", "Name: mysource Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 3m ServiceAccountName: events-sa Mode: Resource Sink: Name: default Namespace: default Kind: Broker (eventing.knative.dev/v1) Resources: Kind: event (v1) Controller: false Conditions: OK TYPE AGE REASON ++ Ready 3m ++ Deployed 3m ++ SinkProvided 3m ++ SufficientPermissions 3m ++ EventTypesProvided 3m", "kn service describe event-display -o url", "oc logs USD(oc get pod -o name | grep event-display) -c user-container", "☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json Data, { \"apiVersion\": \"v1\", \"involvedObject\": { \"apiVersion\": \"v1\", \"fieldPath\": \"spec.containers{event-origin}\", \"kind\": \"Pod\", \"name\": \"event-origin\", \"namespace\": \"default\", .. }, \"kind\": \"Event\", \"message\": \"Started container\", \"metadata\": { \"name\": \"event-origin.159d7608e3a3572c\", \"namespace\": \"default\", . 
}, \"reason\": \"Started\", }", "kn trigger delete <trigger_name>", "kn source apiserver delete <source_name>", "oc delete -f authentication.yaml", "kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"", "apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - \"\" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4", "oc apply -f <filename>", "apiVersion: sources.knative.dev/v1alpha1 kind: ApiServerSource metadata: name: testevents spec: serviceAccountName: events-sa mode: Resource resources: - apiVersion: v1 kind: Event sink: ref: apiVersion: eventing.knative.dev/v1 kind: Broker name: default", "oc apply -f <filename>", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display namespace: default spec: template: spec: containers: - image: quay.io/openshift-knative/showcase", "oc apply -f <filename>", "apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: event-display-trigger namespace: default spec: broker: default subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display", "oc apply -f <filename>", "oc create deployment event-origin --image=quay.io/openshift-knative/showcase", "oc get apiserversource.sources.knative.dev testevents -o yaml", "apiVersion: sources.knative.dev/v1alpha1 kind: ApiServerSource metadata: annotations: creationTimestamp: \"2020-04-07T17:24:54Z\" generation: 1 name: testevents namespace: default resourceVersion: \"62868\" selfLink: /apis/sources.knative.dev/v1alpha1/namespaces/default/apiserversources/testevents2 uid: 1603d863-bb06-4d1c-b371-f580b4db99fa spec: mode: Resource resources: - apiVersion: v1 controller: false controllerSelector: apiVersion: \"\" kind: \"\" name: \"\" uid: \"\" kind: Event labelSelector: {} serviceAccountName: events-sa sink: ref: apiVersion: eventing.knative.dev/v1 kind: Broker name: default", "oc get ksvc event-display -o jsonpath='{.status.url}'", "oc logs USD(oc get pod -o name | grep event-display) -c user-container", "☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json Data, { \"apiVersion\": \"v1\", \"involvedObject\": { \"apiVersion\": \"v1\", \"fieldPath\": \"spec.containers{event-origin}\", \"kind\": \"Pod\", \"name\": \"event-origin\", \"namespace\": \"default\", .. }, \"kind\": \"Event\", \"message\": \"Started container\", \"metadata\": { \"name\": \"event-origin.159d7608e3a3572c\", \"namespace\": \"default\", . 
}, \"reason\": \"Started\", }", "oc delete -f trigger.yaml", "oc delete -f k8s-events.yaml", "oc delete -f authentication.yaml", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/showcase", "kn service create event-display --image quay.io/openshift-knative/showcase", "kn source ping create test-ping-source --schedule \"*/2 * * * *\" --data '{\"message\": \"Hello world!\"}' --sink ksvc:event-display", "kn source ping describe test-ping-source", "Name: test-ping-source Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 15s Schedule: */2 * * * * Data: {\"message\": \"Hello world!\"} Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 8s ++ Deployed 8s ++ SinkProvided 15s ++ ValidSchedule 15s ++ EventTypeProvided 15s ++ ResourcesCorrect 15s", "watch oc get pods", "oc logs USD(oc get pod -o name | grep event-display) -c user-container", "☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 99e4f4f6-08ff-4bff-acf1-47f61ded68c9 time: 2020-04-07T16:16:00.000601161Z datacontenttype: application/json Data, { \"message\": \"Hello world!\" }", "kn delete pingsources.sources.knative.dev <ping_source_name>", "kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"", "apiVersion: sources.knative.dev/v1 kind: PingSource metadata: name: test-ping-source spec: schedule: \"*/2 * * * *\" 1 data: '{\"message\": \"Hello world!\"}' 2 sink: 3 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/showcase", "oc apply -f <filename>", "apiVersion: sources.knative.dev/v1 kind: PingSource metadata: name: test-ping-source spec: schedule: \"*/2 * * * *\" data: '{\"message\": \"Hello world!\"}' sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display", "oc apply -f <filename>", "oc get pingsource.sources.knative.dev <ping_source_name> -oyaml", "apiVersion: sources.knative.dev/v1 kind: PingSource metadata: annotations: sources.knative.dev/creator: developer sources.knative.dev/lastModifier: developer creationTimestamp: \"2020-04-07T16:11:14Z\" generation: 1 name: test-ping-source namespace: default resourceVersion: \"55257\" selfLink: /apis/sources.knative.dev/v1/namespaces/default/pingsources/test-ping-source uid: 3d80d50b-f8c7-4c1b-99f7-3ec00e0a8164 spec: data: '{ value: \"hello\" }' schedule: '*/2 * * * *' sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display namespace: default", "watch oc get pods", "oc logs USD(oc get pod -o name | grep event-display) -c user-container", "☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 042ff529-240e-45ee-b40c-3a908129853e time: 2020-04-07T16:22:00.000791674Z datacontenttype: application/json Data, { \"message\": \"Hello world!\" }", "oc delete -f <filename>", "oc delete -f ping-source.yaml", "kn service create event-display --image 
quay.io/openshift-knative/showcase", "kn source kafka create <kafka_source_name> --servers <cluster_kafka_bootstrap>.kafka.svc:9092 --topics <topic_name> --consumergroup my-consumer-group --sink event-display", "kn source kafka describe <kafka_source_name>", "Name: example-kafka-source Namespace: kafka Age: 1h BootstrapServers: example-cluster-kafka-bootstrap.kafka.svc:9092 Topics: example-topic ConsumerGroup: example-consumer-group Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 1h ++ Deployed 1h ++ SinkProvided 1h", "oc -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:latest-kafka-2.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list <cluster_kafka_bootstrap>:9092 --topic my-topic", "oc logs USD(oc get pod -o name | grep event-display) -c user-container", "☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.kafka.event source: /apis/v1/namespaces/default/kafkasources/example-kafka-source#example-topic subject: partition:46#0 id: partition:46/offset:0 time: 2021-03-10T11:21:49.4Z Extensions, traceparent: 00-161ff3815727d8755848ec01c866d1cd-7ff3916c44334678-00 Data, Hello!", "kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"", "apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: <source_name> spec: consumerGroup: <group_name> 1 bootstrapServers: - <list_of_bootstrap_servers> topics: - <list_of_topics> 2 sink: - <list_of_sinks> 3", "apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: kafka-source spec: consumerGroup: knative-group bootstrapServers: - my-cluster-kafka-bootstrap.kafka:9092 topics: - knative-demo-topic sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display", "oc apply -f <filename>", "oc get pods", "NAME READY STATUS RESTARTS AGE kafkasource-kafka-source-5ca0248f-... 
1/1 Running 0 13m", "oc create secret -n <namespace> generic <kafka_auth_secret> --from-file=ca.crt=caroot.pem --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" \\ 1 --from-literal=user=\"my-sasl-user\"", "apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: example-source spec: net: sasl: enable: true user: secretKeyRef: name: <kafka_auth_secret> key: user password: secretKeyRef: name: <kafka_auth_secret> key: password type: secretKeyRef: name: <kafka_auth_secret> key: saslType tls: enable: true caCert: 1 secretKeyRef: name: <kafka_auth_secret> key: ca.crt", "apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: config: kafka-features: controller-autoscaler-keda: enabled", "oc apply -f <filename>", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/showcase", "oc apply -f <filename>", "apiVersion: sources.knative.dev/v1alpha1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: batch/v1 kind: Job 1 selector: matchLabels: app: heartbeat-cron sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display", "oc apply -f <filename>", "apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: \"* * * * *\" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\" spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: \"true\" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace", "jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\"", "oc apply -f <filename>", "oc get sinkbindings.sources.knative.dev bind-heartbeat -oyaml", "spec: sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display namespace: default subject: apiVersion: batch/v1 kind: Job namespace: default selector: matchLabels: app: heartbeat-cron", "oc get pods", "oc logs USD(oc get pod -o name | grep event-display) -c user-container", "☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { \"id\": 1, \"label\": \"\" }", "kn service create event-display --image quay.io/openshift-knative/showcase", "kn source binding create bind-heartbeat --subject Job:batch/v1:app=heartbeat-cron --sink ksvc:event-display", "apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: \"* * * * *\" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\" spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: \"true\" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace", "jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\"", "oc 
apply -f <filename>", "kn source binding describe bind-heartbeat", "Name: bind-heartbeat Namespace: demo-2 Annotations: sources.knative.dev/creator=minikube-user, sources.knative.dev/lastModifier=minikub Age: 2m Subject: Resource: job (batch/v1) Selector: app: heartbeat-cron Sink: Name: event-display Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 2m", "oc get pods", "oc logs USD(oc get pod -o name | grep event-display) -c user-container", "☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { \"id\": 1, \"label\": \"\" }", "kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/showcase", "apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: \"*/1 * * * *\" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: true 1 spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats args: - --period=1 env: - name: ONE_SHOT value: \"true\" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace", "apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: apps/v1 kind: Deployment namespace: default name: mysubject", "apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: batch/v1 kind: Job namespace: default selector: matchLabels: working: example", "apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: v1 kind: Pod namespace: default selector: - matchExpression: key: working operator: In values: - example - sample", "apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: ceOverrides: extensions: extra: this is an extra attribute additional: 42", "{ \"extensions\": { \"extra\": \"this is an extra attribute\", \"additional\": \"42\" } }", "oc label namespace <namespace> bindings.knative.dev/include=true", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display namespace: <namespace> 1 spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 2 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" spec: containers: - image: quay.io/openshift-knative/showcase", "oc apply -f <filename>", "apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat namespace: <namespace> 1 spec: subject: apiVersion: batch/v1 kind: Job 2 selector: matchLabels: app: heartbeat-cron sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display", "oc apply -f <filename>", "apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron namespace: <namespace> 1 spec: # Run every minute schedule: \"* * * * *\" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\" spec: 
template: metadata: annotations: sidecar.istio.io/inject: \"true\" 2 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: \"true\" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace", "oc apply -f <filename>", "oc get pods", "oc logs USD(oc get pod -o name | grep event-display) -c user-container", "☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing/test/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { \"id\": 1, \"label\": \"\" }", "package main import ( \"context\" \"encoding/json\" \"flag\" \"fmt\" \"log\" \"os\" \"strconv\" \"time\" duckv1 \"knative.dev/pkg/apis/duck/v1\" cloudevents \"github.com/cloudevents/sdk-go/v2\" \"github.com/kelseyhightower/envconfig\" ) type Heartbeat struct { Sequence int `json:\"id\"` Label string `json:\"label\"` } var ( eventSource string eventType string sink string label string periodStr string ) func init() { flag.StringVar(&eventSource, \"eventSource\", \"\", \"the event-source (CloudEvents)\") flag.StringVar(&eventType, \"eventType\", \"dev.knative.eventing.samples.heartbeat\", \"the event-type (CloudEvents)\") flag.StringVar(&sink, \"sink\", \"\", \"the host url to heartbeat to\") flag.StringVar(&label, \"label\", \"\", \"a special label\") flag.StringVar(&periodStr, \"period\", \"5\", \"the number of seconds between heartbeats\") } type envConfig struct { // Sink URL where to send heartbeat cloud events Sink string `envconfig:\"K_SINK\"` // CEOverrides are the CloudEvents overrides to be applied to the outbound event. CEOverrides string `envconfig:\"K_CE_OVERRIDES\"` // Name of this pod. Name string `envconfig:\"POD_NAME\" required:\"true\"` // Namespace this pod exists in. Namespace string `envconfig:\"POD_NAMESPACE\" required:\"true\"` // Whether to run continuously or exit. 
OneShot bool `envconfig:\"ONE_SHOT\" default:\"false\"` } func main() { flag.Parse() var env envConfig if err := envconfig.Process(\"\", &env); err != nil { log.Printf(\"[ERROR] Failed to process env var: %s\", err) os.Exit(1) } if env.Sink != \"\" { sink = env.Sink } var ceOverrides *duckv1.CloudEventOverrides if len(env.CEOverrides) > 0 { overrides := duckv1.CloudEventOverrides{} err := json.Unmarshal([]byte(env.CEOverrides), &overrides) if err != nil { log.Printf(\"[ERROR] Unparseable CloudEvents overrides %s: %v\", env.CEOverrides, err) os.Exit(1) } ceOverrides = &overrides } p, err := cloudevents.NewHTTP(cloudevents.WithTarget(sink)) if err != nil { log.Fatalf(\"failed to create http protocol: %s\", err.Error()) } c, err := cloudevents.NewClient(p, cloudevents.WithUUIDs(), cloudevents.WithTimeNow()) if err != nil { log.Fatalf(\"failed to create client: %s\", err.Error()) } var period time.Duration if p, err := strconv.Atoi(periodStr); err != nil { period = time.Duration(5) * time.Second } else { period = time.Duration(p) * time.Second } if eventSource == \"\" { eventSource = fmt.Sprintf(\"https://knative.dev/eventing-contrib/cmd/heartbeats/#%s/%s\", env.Namespace, env.Name) log.Printf(\"Heartbeats Source: %s\", eventSource) } if len(label) > 0 && label[0] == '\"' { label, _ = strconv.Unquote(label) } hb := &Heartbeat{ Sequence: 0, Label: label, } ticker := time.NewTicker(period) for { hb.Sequence++ event := cloudevents.NewEvent(\"1.0\") event.SetType(eventType) event.SetSource(eventSource) event.SetExtension(\"the\", 42) event.SetExtension(\"heart\", \"yes\") event.SetExtension(\"beats\", true) if ceOverrides != nil && ceOverrides.Extensions != nil { for n, v := range ceOverrides.Extensions { event.SetExtension(n, v) } } if err := event.SetData(cloudevents.ApplicationJSON, hb); err != nil { log.Printf(\"failed to set cloudevents data: %s\", err.Error()) } log.Printf(\"sending cloudevent to %s\", sink) if res := c.Send(context.Background(), event); !cloudevents.IsACK(res) { log.Printf(\"failed to send cloudevent: %v\", res) } if env.OneShot { return } // Wait for next tick <-ticker.C } }", "apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: template: spec: containers: # This corresponds to a heartbeats image URI that you have built and published - image: gcr.io/knative-releases/knative.dev/eventing/cmd/heartbeats name: heartbeats args: - --period=1 env: - name: POD_NAME value: \"example-pod\" - name: POD_NAMESPACE value: \"event-test\" sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: showcase", "kn source container create <container_source_name> --image <image_uri> --sink <sink>", "kn source container delete <container_source_name>", "kn source container describe <container_source_name>", "kn source container list", "kn source container list -o yaml", "kn source container update <container_source_name> --image <image_uri>", "apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: template: spec: containers: - image: quay.io/openshift-knative/heartbeats:latest name: heartbeats args: - --period=1 env: - name: POD_NAME value: \"mypod\" - name: POD_NAMESPACE value: \"event-test\"", "apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: ceOverrides: extensions: extra: this is an extra attribute additional: 42", "{ \"extensions\": { \"extra\": \"this is an extra attribute\", \"additional\": \"42\" } }", "apiVersion: serving.knative.dev/v1 kind: Service 
metadata: name: event-display namespace: <namespace> 1 spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 2 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" spec: containers: - image: quay.io/openshift-knative/showcase", "oc apply -f <filename>", "apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats namespace: <namespace> 1 spec: template: metadata: 2 annotations: sidecar.istio.io/inject: \"true\" sidecar.istio.io/rewriteAppHTTPProbers: \"true\" spec: containers: - image: quay.io/openshift-knative/heartbeats:latest name: heartbeats args: - --period=1s env: - name: POD_NAME value: \"example-pod\" - name: POD_NAMESPACE value: \"event-test\" sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display", "oc apply -f <filename>", "oc get pods", "oc logs USD(oc get pod -o name | grep event-display) -c user-container", "☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing/test/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { \"id\": 1, \"label\": \"\" }" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/eventing/event-sources
Chapter 13. Installing on a single node
Chapter 13. Installing on a single node 13.1. Preparing to install on a single node 13.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You have read the documentation on selecting a cluster installation method and preparing it for users . 13.1.2. About OpenShift on a single node You can create a single-node cluster with standard installation methods. OpenShift Container Platform on a single node is a specialized installation that requires the creation of a special ignition configuration ISO. The primary use case is for edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. The major tradeoff with an installation on a single node is the lack of high availability. Important The use of OpenShiftSDN with single-node OpenShift is not supported. OVN-Kubernetes is the default networking solution for single-node OpenShift deployments. 13.1.3. Requirements for installing OpenShift on a single node Installing OpenShift Container Platform on a single node alleviates some of the requirements for high availability and large scale clusters. However, you must address the following requirements: Administration host: You must have a computer to prepare the ISO, to create the USB boot drive, and to monitor the installation. Supported platforms: Installing OpenShift Container Platform on a single node is supported on bare metal and Certified third-party hypervisors . In all cases, you must specify the platform.none: {} parameter in the install-config.yaml configuration file. Production-grade server: Installing OpenShift Container Platform on a single node requires a server with sufficient resources to run OpenShift Container Platform services and a production workload. Table 13.1. Minimum resource requirements Profile vCPU Memory Storage Minimum 8 vCPU cores 16GB of RAM 120GB Note One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs The server must have a Baseboard Management Controller (BMC) when booting with virtual media. Networking: The server must have access to the internet or access to a local registry if it is not connected to a routable network. The server must have a DHCP reservation or a static IP address for the Kubernetes API, ingress route, and cluster node domain names. You must configure the DNS to resolve the IP address to each of the following fully qualified domain names (FQDN): Table 13.2. Required DNS records Usage FQDN Description Kubernetes API api.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record. This record must be resolvable by clients external to the cluster. Internal API api-int.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster. Ingress route *.apps.<cluster_name>.<base_domain> Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by clients external to the cluster. Without persistent IP addresses, communications between the apiserver and etcd might fail. 13.2. Installing OpenShift on a single node You can install single-node OpenShift using the web-based Assisted Installer and a discovery ISO that you generate using the Assisted Installer. 
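Whichever installation method you choose, it can help to confirm up front that the required DNS records from Table 13.2 resolve from the administration host. The following is a quick sketch using dig, with sno and example.com as hypothetical values for the cluster name and base domain:

dig +short api.sno.example.com
dig +short api-int.sno.example.com
dig +short console-openshift-console.apps.sno.example.com   # any name under *.apps exercises the wildcard record

Each command should print the IP address reserved for the node; an empty result points to a missing or unresolvable record. The same records are needed regardless of which installation method you use.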
You can also install single-node OpenShift by using coreos-installer to generate the installation ISO. 13.2.1. Installing single-node OpenShift using the Assisted Installer To install OpenShift Container Platform on a single node, use the web-based Assisted Installer wizard to guide you through the process and manage the installation. 13.2.1.1. Generating the discovery ISO with the Assisted Installer Installing OpenShift Container Platform on a single node requires a discovery ISO, which the Assisted Installer can generate. Procedure On the administration host, open a browser and navigate to Red Hat OpenShift Cluster Manager . Click Create Cluster to create a new cluster. In the Cluster name field, enter a name for the cluster. In the Base domain field, enter a base domain. For example: All DNS records must be subdomains of this base domain and include the cluster name, for example: Note You cannot change the base domain or cluster name after cluster installation. Select Install single node OpenShift (SNO) and complete the rest of the wizard steps. Download the discovery ISO. Make a note of the discovery ISO URL for installing with virtual media. Note If you enable OpenShift Virtualization during this process, you must have a second local storage device of at least 50GiB for your virtual machines. Additional resources What you can do with OpenShift Virtualization 13.2.1.2. Installing single-node OpenShift with the Assisted Installer Use the Assisted Installer to install the single-node cluster. Procedure Attach the RHCOS discovery ISO to the target host. Configure the boot drive order in the server BIOS settings to boot from the attached discovery ISO and then reboot the server. On the administration host, return to the browser. Wait for the host to appear in the list of discovered hosts. If necessary, reload the Assisted Clusters page and select the cluster name. Complete the install wizard steps. Add networking details, including a subnet from the available subnets. Add the SSH public key if necessary. Monitor the installation's progress. Watch the cluster events. After the installation process finishes writing the operating system image to the server's hard disk, the server restarts. Remove the discovery ISO, and reset the server to boot from the installation drive. The server restarts several times automatically, deploying the control plane. Additional resources Creating a bootable ISO image on a USB drive Booting from an HTTP-hosted ISO image using the Redfish API Adding worker nodes to single-node OpenShift clusters 13.2.2. Installing single-node OpenShift manually To install OpenShift Container Platform on a single node, first generate the installation ISO, and then boot the server from the ISO. You can monitor the installation using the openshift-install installation program. 13.2.2.1. Generating the installation ISO with coreos-installer Installing OpenShift Container Platform on a single node requires an installation ISO, which you can generate with the following procedure. Prerequisites Install podman . Procedure Set the OpenShift Container Platform version: USD OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version, for example, latest-4.11 Set the host architecture: USD ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture, for example, aarch64 or x86_64 . 
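If the administration host has the same architecture as the target host, you can derive the architecture value instead of typing it. This is only a convenience sketch; it assumes that uname -m returns a value accepted by the mirror (x86_64 or aarch64), which holds for the supported architectures:

OCP_VERSION=latest-4.11
ARCH=$(uname -m)   # x86_64 or aarch64 on supported administration hosts
echo "Preparing single-node OpenShift ${OCP_VERSION} assets for ${ARCH}"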
Download the OpenShift Container Platform client ( oc ) and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz USD tar zxf oc.tar.gz USD chmod +x oc Download the OpenShift Container Platform installer and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Retrieve the RHCOS ISO URL by running the following command: USD ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\" -f4) Download the RHCOS ISO: USD curl -L USDISO_URL -o rhcos-live.iso Prepare the install-config.yaml file: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the networking details. OVN-Kubernetes is the only allowed networking type for single-node clusters. 6 Set the cidr value to match the subnet of the single-node OpenShift cluster. 7 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2 . 8 Copy the pull secret from the Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 9 Add the public SSH key from the administration host so that you can log in to the cluster after installation. Generate OpenShift Container Platform assets by running the following commands: USD mkdir ocp USD cp install-config.yaml ocp USD ./openshift-install --dir=ocp create single-node-ignition-config Embed the ignition data into the RHCOS ISO by running the following commands: USD alias coreos-installer='podman run --privileged --pull always --rm \ -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data \ -w /data quay.io/coreos/coreos-installer:release' USD coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso 13.2.2.2. Monitoring the cluster installation using openshift-install Use openshift-install to monitor the progress of the single-node cluster installation. Procedure Attach the modified RHCOS installation ISO to the target host. Configure the boot drive order in the server BIOS settings to boot from the attached discovery ISO and then reboot the server. On the administration host, monitor the installation by running the following command: USD ./openshift-install --dir=ocp wait-for install-complete The server restarts several times while deploying the control plane. 
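While the wait-for command runs, you can also poll the cluster directly once the API server starts responding. A simple sketch, reusing the kubeconfig that the installer writes under the ocp directory:

export KUBECONFIG=ocp/auth/kubeconfig
watch -n 30 "oc get nodes; oc get clusterversion"

Early in the process these commands return connection errors, which is expected; they begin returning data once the control plane is up.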
Verification After the installation is complete, check the environment by running the following command: USD export KUBECONFIG=ocp/auth/kubeconfig USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane.example.com Ready master,worker 10m v1.24.0+beaaed6 Additional resources Creating a bootable ISO image on a USB drive Booting from an HTTP-hosted ISO image using the Redfish API Adding worker nodes to single-node OpenShift clusters 13.2.3. Creating a bootable ISO image on a USB drive You can install software using a bootable USB drive that contains an ISO image. Booting the server with the USB drive prepares the server for the software installation. Procedure On the administration host, insert a USB drive into a USB port. Create a bootable USB drive, for example: # dd if=<path_to_iso> of=<path_to_usb> status=progress where: <path_to_iso> is the relative path to the downloaded ISO file, for example, rhcos-live.iso . <path_to_usb> is the location of the connected USB drive, for example, /dev/sdb . After the ISO is copied to the USB drive, you can use the USB drive to install software on the server. 13.2.4. Booting from an HTTP-hosted ISO image using the Redfish API You can provision hosts in your network using ISOs that you install using the Redfish Baseboard Management Controller (BMC) API. Prerequisites Download the installation Red Hat Enterprise Linux CoreOS (RHCOS) ISO. Procedure Copy the ISO file to an HTTP server accessible in your network. Boot the host from the hosted ISO file, for example: Call the redfish API to set the hosted ISO as the VirtualMedia boot media by running the following command: USD curl -k -u <bmc_username>:<bmc_password> -d '{"Image":"<hosted_iso_file>", "Inserted": true}' -H "Content-Type: application/json" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia Where: <bmc_username>:<bmc_password> Is the username and password for the target host BMC. <hosted_iso_file> Is the URL for the hosted installation ISO, for example: http://webserver.example.com/rhcos-live-minimal.iso . The ISO must be accessible from the target host machine. <host_bmc_address> Is the BMC IP address of the target host machine. Set the host to boot from the VirtualMedia device by running the following command: USD curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{"Boot": {"BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI", "BootSourceOverrideEnabled": "Once"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1 Reboot the host: USD curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "ForceRestart"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset Optional: If the host is powered off, you can boot it using the {"ResetType": "On"} switch. Run the following command: USD curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "On"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset
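For repeated provisioning, the calls above can be wrapped in a small script. The following sketch reuses the iDRAC resource paths shown in this procedure; the variable values are placeholders, and other BMC vendors expose different Manager and System paths:

#!/bin/bash
set -euo pipefail

bmc_address="https://192.0.2.10"                                # placeholder BMC address
bmc_creds="<bmc_username>:<bmc_password>"                       # placeholder credentials
iso_url="http://webserver.example.com/rhcos-live-minimal.iso"   # hosted installation ISO

# Attach the hosted ISO as virtual media
curl -k -u "${bmc_creds}" -H "Content-Type: application/json" -X POST \
  -d "{\"Image\": \"${iso_url}\", \"Inserted\": true}" \
  "${bmc_address}/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia"

# Boot once from the virtual CD in UEFI mode
curl -k -u "${bmc_creds}" -H "Content-Type: application/json" -X PATCH \
  -d '{"Boot": {"BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI", "BootSourceOverrideEnabled": "Once"}}' \
  "${bmc_address}/redfish/v1/Systems/System.Embedded.1"

# Restart the host to begin the installation
curl -k -u "${bmc_creds}" -H "Content-Type: application/json" -X POST \
  -d '{"ResetType": "ForceRestart"}' \
  "${bmc_address}/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset"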
[ "example.com", "<cluster-name>.example.com", "OCP_VERSION=<ocp_version> 1", "ARCH=<architecture> 1", "curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz", "tar zxf oc.tar.gz", "chmod +x oc", "curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz", "tar zxvf openshift-install-linux.tar.gz", "chmod +x openshift-install", "ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\\\" -f4)", "curl -L USDISO_URL -o rhcos-live.iso", "apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9", "mkdir ocp", "cp install-config.yaml ocp", "./openshift-install --dir=ocp create single-node-ignition-config", "alias coreos-installer='podman run --privileged --pull always --rm -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data -w /data quay.io/coreos/coreos-installer:release'", "coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso", "./openshift-install --dir=ocp wait-for install-complete", "export KUBECONFIG=ocp/auth/kubeconfig", "oc get nodes", "NAME STATUS ROLES AGE VERSION control-plane.example.com Ready master,worker 10m v1.24.0+beaaed6", "dd if=<path_to_iso> of=<path_to_usb> status=progress", "curl -k -u <bmc_username>:<bmc_password> -d '{\"Image\":\"<hosted_iso_file>\", \"Inserted\": true}' -H \"Content-Type: application/json\" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia", "curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\", \"BootSourceOverrideEnabled\": \"Once\"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1", "curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"ForceRestart\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset", "curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"On\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/installing/installing-on-a-single-node
8.99. libvirt-cim
8.99. libvirt-cim 8.99.1. RHBA-2013:1676 - libvirt-cim bug fix update Updated libvirt-cim packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The libvirt-cim packages contain a Common Information Model (CIM) provider based on Common Manageability Programming Interface (CMPI). It supports most libvirt virtualization features and allows management of multiple libvirt-based platforms. Bug Fixes BZ# 826179 Previously, running the wbemcli utility with the KVM_ComputerSystem class terminated unexpectedly with a segmentation fault. This was because even when connecting to the libvirtd daemon read-only, the domain XML with secure information, that is with the VIR_DOMAIN_XML_SECURE flag, was dumped. However, this operation is forbidden in libvirt. With this update, the flag is not used with read-only connections. Running the wbemcli command with KVM_ComputerSystem now displays the domain information as expected. BZ# 833633 When updating certain libvirt-cim or sblim-smis-hba packages, the following error could have been logged in the /var/log/messages file: sfcbmof: *** Repository error for /var/lib/sfcb/registration/repository//root/pg_interop/qualifiers This problem occurred because libvirt-cim installed the PG_InterOp class incorrectly in the sblim-sfcb repository, however, this class is specific for the open-pegasus package. With this update, PG_InterOp is unregistered before upgrading the package, and no error message is logged in this scenario. BZ# 859122 Previously, libvirt-cim incorrectly installed providers specific for the open-pegasus package in the sblim-sfcb repository. This could have caused various problems, for example, failures when compiling the MOF files. Providers specific for open-pegasus are now installed in the correct repository and the problems no longer occur. BZ# 908083 Previously, if a qemu domain was defined with a bridge network interface, running the libvirt-cim provider failed with the following error message: Unable to start domain: unsupported configuration: scripts are are not supported on interfaces of type bridge This was because code triggering a script was added in a file used to create the domain prior to checking the qemu domain type. However, scripts are not allowed for qemu domains. With this update, a check for the qemu domain type is performed prior to adding the code triggering the script. As a result, when using libvirt-cim, it is now possible to create qemu domains with the bridge network interface. BZ# 913164 Previously, a call to query a guest's current VNC address and port number returned the static configuration of the guest. If the guest was used to enable the "autoport" selection, the call did not return the allocated port. The libvirt-cim code has been modified to only return static configuration information. This allows other interfaces to return information based on the domain state. As a result, the current and correct port being used by the domain for VNC is now returned. BZ# 1000937 Virtual machines managed by a libvirt-cim broker were not aware of the "dumpCore" flag in the "memory" section nor was there support for the "shareable" property for "disk" devices. Thus, those properties were dropped from the virtual machine XML configuration when the configuration was updated by the broker. As a consequence, customers expecting or setting these properties on their virtual machines had to adjust the configurations in order to reset them. 
With this update, libvirt-cim recognizes these properties, so the broker no longer drops them when it writes the virtual machine XML configuration. As a result, virtual machines managed by the libvirt-cim broker retain the "dumpCore" flag in the "memory" section and the "shareable" property on "disk" devices when the configuration is updated. Users of libvirt-cim are advised to upgrade to these updated packages, which fix these bugs.
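As an illustration of the fix described in BZ#826179, an administrator could repeat the read-only query after updating and confirm that domain information is returned instead of a crash. This is only a sketch; it assumes a local CIM broker listening on the standard CIM-XML port and placeholder credentials:

wbemcli ei 'http://root:password@localhost:5988/root/virt:KVM_ComputerSystem'

The command enumerates KVM_ComputerSystem instances, one per defined domain, without the segmentation fault seen before the update.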
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/libvirt-cim
Chapter 11. Changing the cloud provider credentials configuration
Chapter 11. Changing the cloud provider credentials configuration For supported configurations, you can change how OpenShift Container Platform authenticates with your cloud provider. To determine which cloud credentials strategy your cluster uses, see Determining the Cloud Credential Operator mode . 11.1. Rotating or removing cloud provider credentials After installing OpenShift Container Platform, some organizations require the rotation or removal of the cloud provider credentials that were used during the initial installation. To allow the cluster to use the new credentials, you must update the secrets that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. 11.1.1. Rotating cloud provider credentials with the Cloud Credential Operator utility The Cloud Credential Operator (CCO) utility ccoctl supports updating secrets for clusters installed on IBM Cloud(R). 11.1.1.1. Rotating API keys You can rotate API keys for your existing service IDs and update the corresponding secrets. Prerequisites You have configured the ccoctl binary. You have existing service IDs in a live OpenShift Container Platform cluster installed. Procedure Use the ccoctl utility to rotate your API keys for the service IDs and update the secrets: USD ccoctl <provider_name> refresh-keys \ 1 --kubeconfig <openshift_kubeconfig_file> \ 2 --credentials-requests-dir <path_to_credential_requests_directory> \ 3 --name <name> 4 1 The name of the provider. For example: ibmcloud or powervs . 2 The kubeconfig file associated with the cluster. For example, <installation_directory>/auth/kubeconfig . 3 The directory where the credential requests are stored. 4 The name of the OpenShift Container Platform cluster. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. 11.1.2. Rotating cloud provider credentials manually If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported. For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), and VMware vSphere are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds Azure azure-credentials GCP gcp-credentials RHOSP openstack-credentials VMware vSphere vsphere-creds Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. 
Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials. Note If the vSphere CSI Driver Operator is enabled, this step is not required. To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a user with the cluster-admin role and run the following command: USD oc patch kubecontrollermanager cluster \ -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date )"'"}}' \ --type=merge While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports Progressing=true . To view the status, run the following command: USD oc get co kube-controller-manager If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual CredentialsRequest objects. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Get the names and namespaces of all referenced component secrets: USD oc -n openshift-cloud-credential-operator get CredentialsRequest \ -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef' where <provider_spec> is the corresponding value for your cloud provider: AWS: AWSProviderSpec GCP: GCPProviderSpec Partial example output for AWS { "name": "ebs-cloud-credentials", "namespace": "openshift-cluster-csi-drivers" } { "name": "cloud-credential-operator-iam-ro-creds", "namespace": "openshift-cloud-credential-operator" } Delete each of the referenced component secrets: USD oc delete secret <secret_name> \ 1 -n <secret_namespace> 2 1 Specify the name of a secret. 2 Specify the namespace that contains the secret. Example deletion of an AWS secret USD oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones. Verification To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields have changed. Additional resources The Cloud Credential Operator in mint mode The Cloud Credential Operator in passthrough mode vSphere CSI Driver Operator 11.1.3. Removing cloud provider credentials For clusters that use the Cloud Credential Operator (CCO) in mint mode, the administrator-level credential is stored in the kube-system namespace. The CCO uses the admin credential to process the CredentialsRequest objects in the cluster and create users for components with limited permissions. After installing an OpenShift Container Platform cluster with the CCO in mint mode, you can remove the administrator-level credential secret from the kube-system namespace in the cluster. The CCO only requires the administrator-level credential during changes that require reconciling new or modified CredentialsRequest custom resources, such as minor cluster version updates. Note Before performing a minor version cluster update (for example, updating from OpenShift Container Platform 4.16 to 4.17), you must reinstate the credential secret with the administrator-level credential. 
If the credential is not present, the update might be blocked. Prerequisites Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds GCP gcp-credentials Click the Options menu in the same row as the secret and select Delete Secret . Additional resources The Cloud Credential Operator in mint mode 11.2. Enabling token-based authentication After installing an Microsoft Azure OpenShift Container Platform cluster, you can enable Microsoft Entra Workload ID to use short-term credentials. 11.2.1. Configuring the Cloud Credential Operator utility To configure an existing cluster to create and manage cloud credentials from outside of the cluster, extract and prepare the Cloud Credential Operator utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image}) Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 11.2.2. Enabling Microsoft Entra Workload ID on an existing cluster If you did not configure your Microsoft Azure OpenShift Container Platform cluster to use Microsoft Entra Workload ID during installation, you can enable this authentication method on an existing cluster. Important The process to enable Workload ID on an existing cluster is disruptive and takes a significant amount of time. 
Before proceeding, observe the following considerations: Read the following steps and ensure that you understand and accept the time requirement. The exact time requirement varies depending on the individual cluster, but it is likely to require at least one hour. During this process, you must refresh all service accounts and restart all pods on the cluster. These actions are disruptive to workloads. To mitigate this impact, you can temporarily halt these services and then redeploy them when the cluster is ready. After starting this process, do not attempt to update the cluster until it is complete. If an update is triggered, the process to enable Workload ID on an existing cluster fails. Prerequisites You have installed an OpenShift Container Platform cluster on Microsoft Azure. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have extracted and prepared the Cloud Credential Operator utility ( ccoctl ) binary. You have access to your Azure account by using the Azure CLI ( az ). Procedure Create an output directory for the manifests that the ccoctl utility generates. This procedure uses ./output_dir as an example. Extract the service account public signing key for the cluster to the output directory by running the following command: USD oc get configmap \ --namespace openshift-kube-apiserver bound-sa-token-signing-certs \ --output 'go-template={{index .data "service-account-001.pub"}}' > ./output_dir/serviceaccount-signer.public 1 1 This procedure uses a file named serviceaccount-signer.public as an example. Use the extracted service account public signing key to create an OpenID Connect (OIDC) issuer and Azure blob storage container with OIDC configuration files by running the following command: USD ./ccoctl azure create-oidc-issuer \ --name <azure_infra_name> \ 1 --output-dir ./output_dir \ --region <azure_region> \ 2 --subscription-id <azure_subscription_id> \ 3 --tenant-id <azure_tenant_id> \ --public-key-file ./output_dir/serviceaccount-signer.public 4 1 The value of the name parameter is used to create an Azure resource group. To use an existing Azure resource group instead of creating a new one, specify the --oidc-resource-group-name argument with the existing group name as its value. 2 Specify the region of the existing cluster. 3 Specify the subscription ID of the existing cluster. 4 Specify the file that contains the service account public signing key for the cluster. Verify that the configuration file for the Azure pod identity webhook was created by running the following command: USD ll ./output_dir/manifests Example output total 8 -rw-------. 1 cloud-user cloud-user 193 May 22 02:29 azure-ad-pod-identity-webhook-config.yaml 1 -rw-------. 1 cloud-user cloud-user 165 May 22 02:29 cluster-authentication-02-config.yaml 1 The file azure-ad-pod-identity-webhook-config.yaml contains the Azure pod identity webhook configuration. 
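The cluster-authentication-02-config.yaml manifest is the file that the next step reads the issuer URL from. Its exact contents depend on your cluster, but it is typically a small Authentication resource similar to the following sketch, where the storage account and container names are placeholders:

apiVersion: config.openshift.io/v1
kind: Authentication
metadata:
  name: cluster
spec:
  serviceAccountIssuer: https://<oidc_storage_account>.blob.core.windows.net/<oidc_container>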
Set an OIDC_ISSUER_URL variable with the OIDC issuer URL from the generated manifests in the output directory by running the following command: USD OIDC_ISSUER_URL=`awk '/serviceAccountIssuer/ { print USD2 }' ./output_dir/manifests/cluster-authentication-02-config.yaml` Update the spec.serviceAccountIssuer parameter of the cluster authentication configuration by running the following command: USD oc patch authentication cluster \ --type=merge \ -p "{\"spec\":{\"serviceAccountIssuer\":\"USD{OIDC_ISSUER_URL}\"}}" Monitor the configuration update progress by running the following command: USD oc adm wait-for-stable-cluster This process might take 15 minutes or longer. The following output indicates that the process is complete: All clusteroperators are stable Restart all of the pods in the cluster by running the following command: USD oc adm reboot-machine-config-pool mcp/worker mcp/master Restarting a pod updates the serviceAccountIssuer field and refreshes the service account public signing key. Monitor the restart and update process by running the following command: USD oc adm wait-for-node-reboot nodes --all This process might take 15 minutes or longer. The following output indicates that the process is complete: All nodes rebooted Update the Cloud Credential Operator spec.credentialsMode parameter to Manual by running the following command: USD oc patch cloudcredential cluster \ --type=merge \ --patch '{"spec":{"credentialsMode":"Manual"}}' Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --credentials-requests \ --included \ --to <path_to_directory_for_credentials_requests> \ --registry-config ~/.pull-secret Note This command might take a few moments to run. Set an AZURE_INSTALL_RG variable with the Azure resource group name by running the following command: USD AZURE_INSTALL_RG=`oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'` Use the ccoctl utility to create managed identities for all CredentialsRequest objects by running the following command: USD ccoctl azure create-managed-identities \ --name <azure_infra_name> \ --output-dir ./output_dir \ --region <azure_region> \ --subscription-id <azure_subscription_id> \ --credentials-requests-dir <path_to_directory_for_credentials_requests> \ --issuer-url "USD{OIDC_ISSUER_URL}" \ --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \ 1 --installation-resource-group-name "USD{AZURE_INSTALL_RG}" 1 Specify the name of the resource group that contains the DNS zone. Apply the Azure pod identity webhook configuration for Workload ID by running the following command: USD oc apply -f ./output_dir/manifests/azure-ad-pod-identity-webhook-config.yaml Apply the secrets generated by the ccoctl utility by running the following command: USD find ./output_dir/manifests -iname "openshift*yaml" -print0 | xargs -I {} -0 -t oc replace -f {} This process might take several minutes. Restart all of the pods in the cluster by running the following command: USD oc adm reboot-machine-config-pool mcp/worker mcp/master Restarting a pod updates the serviceAccountIssuer field and refreshes the service account public signing key. Monitor the restart and update process by running the following command: USD oc adm wait-for-node-reboot nodes --all This process might take 15 minutes or longer. 
The following output indicates that the process is complete: All nodes rebooted Monitor the configuration update progress by running the following command: USD oc adm wait-for-stable-cluster This process might take 15 minutes or longer. The following output indicates that the process is complete: All clusteroperators are stable Optional: Remove the Azure root credentials secret by running the following command: USD oc delete secret -n kube-system azure-credentials Additional resources Microsoft Entra Workload ID Configuring an Azure cluster to use short-term credentials 11.2.3. Verifying that a cluster uses short-term credentials You can verify that a cluster uses short-term security credentials for individual components by checking the Cloud Credential Operator (CCO) configuration and other values in the cluster. Prerequisites You deployed an OpenShift Container Platform cluster using the Cloud Credential Operator utility ( ccoctl ) to implement short-term credentials. You installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Verify that the CCO is configured to operate in manual mode by running the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output confirms that the CCO is operating in manual mode: Example output Manual Verify that the cluster does not have root credentials by running the following command: USD oc get secrets \ -n kube-system <secret_name> where <secret_name> is the name of the root secret for your cloud provider. Platform Secret name Amazon Web Services (AWS) aws-creds Microsoft Azure azure-credentials Google Cloud Platform (GCP) gcp-credentials An error confirms that the root secret is not present on the cluster. Example output for an AWS cluster Error from server (NotFound): secrets "aws-creds" not found Verify that the components are using short-term security credentials for individual components by running the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the cluster is using manual mode with short-term credentials that are created and managed from outside of the cluster. Azure clusters: Verify that the components are assuming the Azure client ID that is specified in the secret manifests by running the following command: USD oc get secrets \ -n openshift-image-registry installer-cloud-credentials \ -o jsonpath='{.data}' An output that contains the azure_client_id and azure_federated_token_file felids confirms that the components are assuming the Azure client ID. Azure clusters: Verify that the pod identity webhook is running by running the following command: USD oc get pods \ -n openshift-cloud-credential-operator Example output NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m 11.3. Additional resources About the Cloud Credential Operator
[ "ccoctl <provider_name> refresh-keys \\ 1 --kubeconfig <openshift_kubeconfig_file> \\ 2 --credentials-requests-dir <path_to_credential_requests_directory> \\ 3 --name <name> 4", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge", "oc get co kube-controller-manager", "oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'", "{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }", "oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2", "oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers", "RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image})", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "oc get configmap --namespace openshift-kube-apiserver bound-sa-token-signing-certs --output 'go-template={{index .data \"service-account-001.pub\"}}' > ./output_dir/serviceaccount-signer.public 1", "./ccoctl azure create-oidc-issuer --name <azure_infra_name> \\ 1 --output-dir ./output_dir --region <azure_region> \\ 2 --subscription-id <azure_subscription_id> \\ 3 --tenant-id <azure_tenant_id> --public-key-file ./output_dir/serviceaccount-signer.public 4", "ll ./output_dir/manifests", "total 8 -rw-------. 1 cloud-user cloud-user 193 May 22 02:29 azure-ad-pod-identity-webhook-config.yaml 1 -rw-------. 
1 cloud-user cloud-user 165 May 22 02:29 cluster-authentication-02-config.yaml", "OIDC_ISSUER_URL=`awk '/serviceAccountIssuer/ { print USD2 }' ./output_dir/manifests/cluster-authentication-02-config.yaml`", "oc patch authentication cluster --type=merge -p \"{\\\"spec\\\":{\\\"serviceAccountIssuer\\\":\\\"USD{OIDC_ISSUER_URL}\\\"}}\"", "oc adm wait-for-stable-cluster", "All clusteroperators are stable", "oc adm reboot-machine-config-pool mcp/worker mcp/master", "oc adm wait-for-node-reboot nodes --all", "All nodes rebooted", "oc patch cloudcredential cluster --type=merge --patch '{\"spec\":{\"credentialsMode\":\"Manual\"}}'", "oc adm release extract --credentials-requests --included --to <path_to_directory_for_credentials_requests> --registry-config ~/.pull-secret", "AZURE_INSTALL_RG=`oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'`", "ccoctl azure create-managed-identities --name <azure_infra_name> --output-dir ./output_dir --region <azure_region> --subscription-id <azure_subscription_id> --credentials-requests-dir <path_to_directory_for_credentials_requests> --issuer-url \"USD{OIDC_ISSUER_URL}\" --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \\ 1 --installation-resource-group-name \"USD{AZURE_INSTALL_RG}\"", "oc apply -f ./output_dir/manifests/azure-ad-pod-identity-webhook-config.yaml", "find ./output_dir/manifests -iname \"openshift*yaml\" -print0 | xargs -I {} -0 -t oc replace -f {}", "oc adm reboot-machine-config-pool mcp/worker mcp/master", "oc adm wait-for-node-reboot nodes --all", "All nodes rebooted", "oc adm wait-for-stable-cluster", "All clusteroperators are stable", "oc delete secret -n kube-system azure-credentials", "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "Manual", "oc get secrets -n kube-system <secret_name>", "Error from server (NotFound): secrets \"aws-creds\" not found", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o jsonpath='{.data}'", "oc get pods -n openshift-cloud-credential-operator", "NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/postinstallation_configuration/changing-cloud-credentials-configuration
Chapter 15. Managing RHEL for Edge images
Chapter 15. Managing RHEL for Edge images To manage the RHEL for Edge images, you can perform any of the following administrative tasks: Edit the RHEL for Edge image blueprint by using image builder in RHEL web console or in the command-line Build a commit update by using image builder command-line Update the RHEL for Edge images Configure rpm-ostree remotes on nodes, to update node policy Restore RHEL for Edge images manually or automatically by using greenboot 15.1. Editing a RHEL for Edge image blueprint by using image builder You can edit the RHEL for Edge image blueprint to: Add additional components that you might require Modify the version of any existing component Remove any existing component 15.1.1. Adding a component to RHEL for Edge blueprint using image builder in RHEL web console To add a component to a RHEL for Edge image blueprint, ensure that you have met the following prerequisites and then follow the procedure to edit the corresponding blueprint. Prerequisites On a RHEL system, you have accessed the RHEL image builder dashboard. You have created a blueprint for RHEL for Edge image. Procedure On the RHEL image builder dashboard, click the blueprint that you want to edit. To search for a specific blueprint, enter the blueprint name in the filter text box, and click Enter . On the upper right side of the blueprint, click Edit Packages . The Edit blueprints wizard opens. On the Details page, update the blueprint name and click . On the Packages page, follow the steps: In the Available Packages , enter the package name that you want to add in the filter text box, and click Enter . A list with the component name appears. Click > to add the component to the blueprint. On the Review page, click Save . The blueprint is now updated with the new package. 15.1.2. Removing a component from a blueprint using RHEL image builder in the web console To remove one or more unwanted components from a blueprint that you created by using RHEL image builder, ensure that you have met the following prerequisites and then follow the procedure. Prerequisites On a RHEL system, you have accessed the RHEL image builder dashboard. You have created a blueprint for RHEL for Edge image. You have added at least one component to the RHEL for Edge blueprint. Procedure On the RHEL image builder dashboard, click the blueprint that you want to edit. To search for a specific blueprint, enter the blueprint name in the filter text box, and click Enter . On the upper right side of the blueprint, click Edit Packages . The Edit blueprints wizard opens. On the Details page, update the blueprint name and click . On the Packages page, follow the steps: From the Chosen packages , click < to remove the chosen component. You can also click << to remove all the packages at once. On the Review page, click Save . The blueprint is now updated. 15.1.3. Editing a RHEL for Edge image blueprint by using the command line You can change the specifications for your RHEL for Edge image blueprint by using the RHEL image builder command-line interface. To do so, ensure that you have met the following prerequisites and then follow the procedure to edit the corresponding blueprint. Prerequisites You have access to the RHEL image builder command-line. You have created a RHEL for Edge image blueprint. Procedure Save (export) the blueprint to a local text file: Edit the BLUEPRINT-NAME.toml file with a text editor of your choice and make your changes. 
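For example, after adding a package and bumping the version, the edited blueprint file might look like the following sketch; the blueprint name and the added package are placeholders:

name = "edge-server"
description = "RHEL for Edge image with tmux added"
version = "0.0.2"

[[packages]]
name = "tmux"
version = "*"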
Before finishing with the edits, verify that the file is a valid blueprint: Increase the version number. Ensure that you use a Semantic Versioning scheme. Note if you do not change the version, the patch component of the version is increased automatically. Check if the contents are valid TOML specifications. See the TOML documentation for more information. Note TOML documentation is a community product and is not supported by Red Hat. You can report any issues with the tool at https://github.com/toml-lang/toml/issues . Save the file and close the editor. Push (import) the blueprint back into RHEL image builder server: Note When pushing the blueprint back into the RHEL image builder server, provide the file name including the .toml extension. Verify that the contents uploaded to RHEL image builder match your edits: Check whether the components and versions listed in the blueprint and their dependencies are valid: 15.2. Updating RHEL for Edge images 15.2.1. How RHEL for Edge image updates are deployed With RHEL for Edge images, you can either deploy the updates manually or can automate the deployment process. The updates are applied in an atomic manner, where the state of each update is known, and the updates are staged and applied only upon reboot. Because no changes are seen until you reboot the device, you can schedule a reboot to ensure the highest possible uptime. During the image update, only the updated operating system content is transferred over the network. This makes the deployment process more efficient compared to transferring the entire image. The operating system binaries and libraries in /usr are read-only , and the read and write state is maintained in /var and /etc directories. When moving to a new deployment, the /etc and the /var directories are copied to the new deployment with read and write permissions. The /usr directory is copied as a soft link to the new deployment directory, with read-only permissions. The following diagram illustrates the RHEL for Edge image update deployment process: By default, the new system is booted using a procedure similar to a chroot operation, that is, the system enables control access to a filesystem while controlling the exposure to the underlying server environment. The new /sysroot directory mainly has the following parts: Repository database at the /sysroot/ostree/repo directory. File system revisions at the /sysroot/ostree/deploy/rhel/deploy directory, which are created by each operation in the system update. The /sysroot/ostree/boot directory, which links to deployments on the point. Note that /ostree is a soft link to /sysroot/ostree . The files from the /sysroot/ostree/boot directory are not duplicated. The same file is used if it is not changed during the deployment. The files are hard-links to another file stored in the /sysroot/ostree/repo/objects directory. The operating system selects the deployment in the following way: The dracut tool parses the ostree kernel argument in the initramfs root file system and sets up the /usr directory as a read-only bind mount. Bind the deployment directory in /sysroot to / directory. Re-mount the operating system already mounted dirs using the MS_MOVE mount flag If anything goes wrong, you can perform a deployment rollback by removing the old deployments with the rpm-ostree cleanup command. Each client machine contains an OSTree repository stored in /ostree/repo , and a set of deployments stored in /ostree/deploy/USDSTATEROOT/USDCHECKSUM . 
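You can inspect these objects on a running edge device. For example, the following commands, run as root, show the deployments that rpm-ostree knows about and the file system revisions that back them; the rhel stateroot name matches the layout described above:

# Show the booted and any staged or rollback deployments
rpm-ostree status
# List the checksummed file system revisions for the rhel stateroot
ls /sysroot/ostree/deploy/rhel/deploy/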
With the deployment updates in RHEL for Edge image, you can benefit from a better system consistency across multiple devices, easier reproducibility, and better isolation between the pre and post system states change. 15.2.2. Building a commit update You can build a commit update after making a change in the blueprint, such as: Adding an additional package that your system requires Modifying the package version of any existing component Removing any existing package. Prerequisites You have updated a system which is running RHEL image builder. You created a blueprint update. You have previously created an OSTree repository and served it through HTTP. See Setting up a web server to install RHEL for Edge images . Procedure Start the compose of the new commit image, with the following arguments: --url , --ref , blueprint-name , edge-commit . The command instructs the compose process to fetch the metadata from the OStree repo before starting the compose. The resulting new OSTree commit contains a reference of the original OSTree commit as a parent image. After the compose process finishes, fetch the .tar file. Extract the commit to a temporary directory, so that you can store the commit history in the OSTree repo. Inspect the resulting OSTree repo commit, by using the tar -xf command. It extracts the tar file to disk so you can inspect the resulting OSTree repo: In the output example, there is a single OSTree commit in the repo that references a parent commit. The parent commit is the same checksum from the original OSTree commit that you previously made. Merge the two commits by using the ostree pull-local command: This command copies any new metadata and content from the location on the disk, for example, /var/tmp , to a destination OSTree repo in /var/srv/httpd . Verification Inspect the target OSTree repo: You can see that the target OSTree repo now contains two commits in the repository, in a logical order. After successful verification, you can update your RHEL for Edge system. 15.2.3. Deploying RHEL for Edge image updates manually After you have edited a RHEL for Edge blueprint, you can update the image commit. RHEL image builder generates a new commit for the updated RHEL for Edge image. Use this new commit to deploy the image with latest package versions or with additional packages. To deploy RHEL for Edge images updates, ensure that you meet the prerequisites and then follow the procedure. Prerequisites On a RHEL system, you have accessed the RHEL image builder dashboard. You have created a RHEL for Edge image blueprint. You have edited the RHEL for Edge image blueprint. Procedure On the RHEL image builder dashboard click Create Image . On the Create Image window, perform the following steps: In the Image output page: From the Select a blueprint dropdown list, select the blueprint that you edited. From the Image output type dropdown list, select RHEL for Edge Commit (.tar) . Click . In the OSTree settings page, enter: In the Repository URL field, enter the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. See Setting up a web server to install RHEL for Edge image . In the Parent commit field, specify the parent commit ID that was previously generated. See Extracting RHEL for Edge image commit . In the Ref field, you can either specify a name for your commit or leave it empty. By default, the web console specifies the Ref as rhel/9/arch_name/edge . Click . In the Review page, check the customizations and click Create image . 
RHEL image builder starts to create a RHEL for Edge image for the updated blueprint. The image creation process takes a few minutes to complete. To view the RHEL for Edge image creation progress, click the blueprint name from the breadcrumbs, and then click the Images tab. The resulting image includes the latest packages that you have added, if any, and have the original commit ID as a parent. Download the resulting RHEL for Edge Commit ( .tar ) image. From the Images tab, click Download to save the RHEL for Edge Commit ( .tar ) image to your system. Extract the OSTree commit ( .tar ) file. Upgrade the OSTree repo: On the RHEL system provisioned, from the original edge image, verify the current status. If there is no new commit ID, run the following command to verify if there is any upgrade available: The command output provides the current active OSTree commit ID. Update OSTree to make the new OSTree commit ID available. OSTree verifies if there is an update on the repository. If yes, it fetches the update and requests you to reboot your system so that you can activate the deployment of this new commit update. Check the current status again: You can now see that there are 2 commits available: The active parent commit. A new commit that is not active and contains 1 added difference. To activate the new deployment and to make the new commit active, reboot your system. The Anaconda Installer reboots into the new deployment. On the login screen, you can see a new deployment available for you to boot. If you want to boot into the newest deployment (commit), the rpm-ostree upgrade command automatically orders the boot entries so that the new deployment is first in the list. Optionally, you can use the arrow key on your keyboard to select the GRUB menu entry and press Enter . Provide your login user account credentials. Verify the OSTree status: The command output provides the active commit ID. To view the changed packages, if any, run a diff between the parent commit and the new commit: The update shows that the package you have installed is available and ready for use. 15.2.4. Deploying RHEL for Edge image updates manually using the command-line After you have edited a RHEL for Edge blueprint, you can update the image commit. RHEL image builder generates a new commit for the updated RHEL for Edge image. Use the new commit to deploy the image with latest package versions or with additional packages using the CLI. To deploy RHEL for Edge image updates using the CLI, ensure that you meet the prerequisites, and then follow the procedure. Prerequisites You created the RHEL for Edge image blueprint. You edited the RHEL for Edge image blueprint. See Editing a RHEL for Edge image blueprint by using the command line . Procedure Create the RHEL for Edge Commit ( .tar ) image with the following arguments: where ref is the reference you provided during the creation of the RHEL for Edge Container commit. For example, rhel/9/x86_64/edge . URL-OSTree-repository is the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. See Setting up a web server to install RHEL for Edge image . image-type is edge-commit . RHEL image builder creates a RHEL for Edge image for the updated blueprint. Check the RHEL for Edge image creation progress: Note The image creation processes can take up to ten to thirty minutes to complete. The resulting image includes the latest packages that you have added, if any, and has the original commit ID as a parent. 
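A short sketch of checking the compose progress and fetching the finished commit from the command line; the UUID is a placeholder taken from the status output, and the compose is ready to download when its status is FINISHED:

$ sudo composer-cli compose status
$ sudo composer-cli compose image <UUID>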
Download the resulting RHEL for Edge image. For more information, see Downloading a RHEL for Edge image using the RHEL image builder command-line interface . Extract the OSTree commit. Serve the OSTree commit by using httpd. See Setting up a web server to install RHEL for Edge image . Upgrade the OSTree repo: On the RHEL system provisioned from the original edge image, verify the current status: If there is no new commit ID, run the following command to verify if there is any upgrade available: The command output provides the current active OSTree commit ID. Update OSTree to make the new OSTree commit ID available: OSTree verifies if there is an update on the repository. If yes, it fetches the update and requests you to reboot your system so that you can activate the deployment of the new commit update. Check the current status again: You should now see that there are 2 commits available: The active parent commit A new commit that is not active and contains 1 added difference To activate the new deployment and make the new commit active, reboot your system: The Anaconda Installer reboots into the new deployment. On the login screen, you can see a new deployment available for you to boot. If you want to boot into the newest deployment, the rpm-ostree upgrade command automatically orders the boot entries so that the new deployment is first in the list. Optionally, you can use the arrow key on your keyboard to select the GRUB menu entry and press Enter . Log in using your account credentials. Verify the OSTree status: The command output provides the active commit ID. To view the changed packages, if any, run a diff between the parent commit and the new commit: The update shows that the package you have installed is available and ready for use. 15.2.5. Deploying RHEL for Edge image updates manually for non-network-base deployments After editing a RHEL for Edge blueprint, you can update your RHEL for Edge Commit image with those updates. Use RHEL image builder to generate a new commit to update your RHEL for Edge image that is already deployed in a VM, for example. Use this new commit to deploy the image with latest package versions or with additional packages. To deploy RHEL for Edge images updates, ensure that you meet the prerequisites and then follow the procedure. Prerequisites On your host, you have opened the RHEL image builder app from the web console in a browser. You have a RHEL for Edge system provisioned that is up and running. You have an OSTree repository that is being served over HTTP. You have edited a previously created RHEL for Edge image blueprint. Procedure On your system host, on the RHEL image builder dashboard, click Create Image . On the Create Image window, perform the following steps: In the Image output page: From the Select a blueprint dropdown list, select the blueprint that you edited. From the Image output type dropdown list, select RHEL for Edge Container (.tar) . Click . In the OSTree settings page, enter: In the Repository URL field, enter the URL to the OSTree repository of the commit to embed in the image. For example, http://10.0.2.2:8080/repo/. See Setting up a web server to install RHEL for Edge image . In the Parent commit field, specify the parent commit ID that was previously generated. See Extracting RHEL for Edge image commit . In the Ref field, you can either specify a name for your commit or leave it empty. By default, the web console specifies the Ref as rhel/9/arch_name/edge . Click . In the Review page, check the customizations and click Create . 
RHEL image builder creates a RHEL for Edge image for the updated blueprint. Click the Images tab to view the progress of RHEL for Edge image creation. Note The image creation process takes a few minutes to complete. The resulting image includes the latest packages that you have added, if any, and has the original commit ID as a parent. Download the resulting RHEL for Edge image on your host: From the Images tab, click Download to save the RHEL for Edge Container ( .tar ) image to your host system. On the RHEL system provisioned from the original edge image, perform the following steps: Load the RHEL for Edge Container image into Podman, serving the child commit ID this time. Run Podman . Upgrade the OSTree repo: On the RHEL system provisioned, from the original edge image, verify the current status. If there is no new commit ID, run the following command to verify if there is any upgrade available: If there are updates available, the command output provides information about the available updates in the OSTree repository, such as the current active OSTree commit ID. Else, it prompts a message informing that there are no updates available. Update OSTree to make the new OSTree commit ID available. OSTree verifies if there is an update on the repository. If yes, it fetches the update and requests you to reboot your system so that you can activate the deployment of this new commit update. Check the current system status: You can now see that there are 2 commits available: The active parent commit. A new commit that is not active and contains 1 added difference. To activate the new deployment and to make the new commit active, reboot your system. The Anaconda Installer reboots into the new deployment. On the login screen, you can see a new deployment available for you to boot. To boot into the newest commit, run the following command to automatically order the boot entries so that the new deployment is first in the list: Optionally, you can use the arrow key on your keyboard to select the GRUB menu entry and press Enter . Provide your login user account credentials. Verify the OSTree status: The command output provides the active commit ID. To view the changed packages, if any, run a diff between the parent commit and the new commit: The update shows that the package you have installed is available and ready for use. 15.3. Upgrading RHEL for Edge systems 15.3.1. Upgrading your RHEL 8 system to RHEL 9 You can upgrade your RHEL 8 system to RHEL 9 by using the rpm-ostree rebase command. The command fully supports the default package set of RHEL for Edge upgrades from the most recent updates of RHEL 8 to the most recent updates of RHEL 9. The upgrade downloads and installs the RHEL 9 image in the background. After the upgrade finishes, you must reboot your system to use the new RHEL 9 image. Warning The upgrade does not support every possible rpm package version and inclusions. You must test your package additions to ensure that these packages works as expected. Prerequisites You have a running RHEL for Edge 8 system You have an OSTree repository server by HTTP You created a blueprint for RHEL for Edge 9 image that you will upgrade Procedure On the system where RHEL image builder runs, create a RHEL for Edge 9 image: Start the image compose: Optionally, you can also create the new RHEL for Edge 9 image by using a pre-existing OSTree repository, by using the following command: After the compose finishes, download the image. 
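A condensed sketch of this compose-and-download step, using the simple compose form with a placeholder blueprint name and compose UUID; the variant that reuses an existing OSTree repository adds the --ref, --parent, and --url options:

$ sudo composer-cli compose start rhel9-edge-blueprint edge-commit
$ sudo composer-cli compose status
$ sudo composer-cli compose image <UUID>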
Extract the downloaded image to /var/www/html/ folder: Start the httpd service: On the RHEL for Edge device, check the current remote repository configuration: Note Depending on how your Kickstart file is configured, the /etc/ostree/remotes.d repository can be empty. If you configured your remote repository, you can see its configuration. For the edge-installer , raw-image , and simplified-installer images, the remote is configured by default. Check the current URL repository: edge is the of the Ostree repository. List the remote reference branches: You can see the following output: To add the new repository: Configure the URL key to add a remote repository. For example: Configure the URL key to point to the RHEL 9 commit for the upgrade. For example: Confirm if the URL has been set to the new remote repository: See the new URL repository: List the current remote list options: Rebase your system to the RHEL version, providing the reference path for the RHEL 9 version: Reboot your system. Enter your username and password. Check the current system status: Verification Check the current status of the currently running deployment: Optional: List the processor and tasks managed by the kernel in real-time. If the upgrade does not support your requirements, you have the option to manually rollback to the stable deployment RHEL 8 version: Reboot your system. Enter your username and password: After rebooting, your system runs RHEL 9 successfully. Note If your upgrade succeeds and you do not want to use the deployment RHEL 8 version, you can delete the old repository: Additional resources rpm-ostree update and rebase fails with failed to find kernel error (Red Hat Knowledgebase) 15.4. Deploying RHEL for Edge automatic image updates After you install a RHEL for Edge image on an Edge device, you can check for image updates available, if any, and can auto-apply them. The rpm-ostreed-automatic.service (systemd service) and rpm-ostreed-automatic.timer (systemd timer) control the frequency of checks and upgrades. The available updates, if any, appear as staged deployments. Deploying automatic image updates involves the following high-level steps: Update the image update policy Enable automatic download and staging of updates 15.4.1. Updating the RHEL for Edge image update policy To update the image update policy, use the AutomaticUpdatePolicy and an IdleExitTimeout setting from the rpm-ostreed.conf file at /etc/rpm-ostreed.conf location on an Edge device. The AutomaticUpdatePolicy settings controls the automatic update policy and has the following update checks options: none : Disables automatic updates. By default, the AutomaticUpdatePolicy setting is set to none . check : Downloads enough metadata to display available updates with rpm-ostree status. stage : Downloads and unpacks the updates that are applied on a reboot. The IdleExitTimeout setting controls the time in seconds of inactivity before the daemon exit and has the following options: 0: Disables auto-exit. 60: By default, the IdleExitTimeout setting is set to 60 . To enable automatic updates, perform the following steps: Procedure In the /etc/rpm-ostreed.conf file, update the following: Change the value of AutomaticUpdatePolicy to check . To run the update checks, specify a value in seconds for IdleExitTimeout . Reload the rpm-ostreed service and enable the systemd timer. Verify the rpm-ostree status to ensure the automatic update policy is configured and time is active. 
The command output shows the following: Additionally, the output also displays information about the available updates. 15.4.2. Enabling RHEL for Edge automatic download and staging of updates After you update the image update policy to check for image updates, the updates if any are displayed along with the update details. If you decide to apply the updates, enable the policy to automatically download and stage the updates. The available image updates are then downloaded and staged for deployment. The updates are applied and take effect when you reboot the Edge device. To enable the policy for automatic download and staging of updates, perform the following updates: Procedure In the /etc/rpm-ostreed.conf file, update "AutomaticUpdatePolicy" to stage . Reload the rpm-ostreed service. Verify the rpm-ostree status The command output shows the following: To initiate the updates, you can either wait for the timer to initiate the updates, or can manually start the service. After the updates are initiated, the rpm-ostree status shows the following: When the update is complete, a new deployment is staged in the list of deployments, and the original booted deployment is left untouched. You can decide if you want to boot the system using the new deployment or can wait for the update. To view the list of deployments, run the rpm-ostree status command. Following is a sample output. To view the list of deployments with the updated package details, run the rpm-ostree status -v command. 15.5. Rolling back RHEL for Edge images Because RHEL for Edge applies transactional updates to the operating system, you can either manually or automatically roll back the unsuccessful updates to the last known good state, which prevents system failure during updates. You can automate the verification and rollback process by using the greenboot framework. The greenboot health check framework leverages rpm-ostree to run custom health checks on system startup. In case of an issue, the system rolls back to the last working state. When you deploy a rpm-ostree update, it runs scripts to check that critical services can still work after the update. If the system does not work, for example, due to some failed package, you can roll back the system to a stable version of the system. This process ensures that your RHEL for Edge device is in an operational state. After you update an image, it creates a new image deployment while preserving the image deployment. You can verify whether the update was successful. If the update is unsuccessful, for example, due to a failed package, you can roll back the system to a stable version. 15.5.1. Introducing the greenboot checks Greenboot is a Generic Health Check Framework for systemd available on rpm-ostree based systems. It contains the following RPM packages that you can install on your system: greenboot - a package that contains the following functionalities: Checking provided scripts Reboot the system if the check fails Rollback to a deployment the reboot did not solve the issue. greenboot-default-health-checks - a set of optional and selected health checks provided by your greenboot system maintainers. Greenboot works in a RHEL for Edge system by using health check scripts that run on the system to assess the system health and automate a rollback to the last healthy state in case of some software fails. These health checks scripts are available in the /etc/greenboot/check/required.d directory. Greenboot supports shell scripts for the health checks. 
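As an illustration only, a minimal required health check could look like the following sketch. The myapp.service unit name and the script path are placeholders, and the only contract greenboot relies on is the exit code: 0 for healthy, nonzero for failed.

#!/bin/bash
# /etc/greenboot/check/required.d/01-check-myapp.sh (hypothetical path and name)
# Fail the boot, and allow greenboot to trigger a rollback, if the workload unit is not running.
if ! systemctl is-active --quiet myapp.service; then
    echo "myapp.service is not active" >&2
    exit 1
fi
exit 0

Make the script executable, for example with chmod +x, so that greenboot can run it at startup.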
Having a health check framework is especially useful when you need to check for software problems and perform system rollbacks on edge devices where direct serviceability is either limited or non-existent. When you install and configure health check scripts, the health checks run every time the system starts. You can create your own health check scripts to assess the health of your workloads and applications. These additional health check scripts are useful components of software problem checks and automatic system rollbacks. Note You cannot use rollback in case of any health check failure on a system that is not using OSTree.
15.5.2. RHEL for Edge images roll back with greenboot With RHEL for Edge images, only transactional updates are applied to the operating system. The transactional updates are atomic, which means that the updates are applied only if all the updates are successful, and there is support for rollbacks. With transactional updates, you can easily roll back unsuccessful updates to the last known good state, preventing system failure during updates. Performing health checks is especially useful when you need to check for software problems and perform system rollbacks on edge devices where direct serviceability is limited or non-existent. Note You cannot use rollback in case of an update failure on a system that is not using OSTree, even if health checks might run. You can use intelligent rollbacks with the greenboot health check framework to automatically assess system health every time the system starts. You can obtain pre-configured health checks from the greenboot-default-health-checks subpackage. These checks are located in the /usr/lib/greenboot/check read-only directory in rpm-ostree systems. Greenboot leverages rpm-ostree and runs custom health checks on system startup. In case of an issue, the system rolls back the changes and preserves the last working state. When you deploy an rpm-ostree update, it runs scripts to check that critical services can still work after the update. If the system does not work, the update rolls back to the last known working version of the system. This process ensures that your RHEL for Edge device is in an operational state. You can also configure shell scripts as the following types of checks:
Example 15.1. The greenboot directory structure
Required Contains the health checks that must not fail. Place required shell scripts in the /etc/greenboot/check/required.d directory. If the scripts fail, greenboot retries them three times by default. You can configure the number of retries in the /etc/greenboot/greenboot.conf file by setting the GREENBOOT_MAX_BOOTS parameter to the number of retries you want. After all retries fail, greenboot automatically initiates a rollback if one is available. If a rollback is not available, the system log output shows that you need to perform a manual intervention.
Wanted Contains the health checks that might fail without causing the system to roll back. Place wanted shell scripts in the /etc/greenboot/check/wanted.d directory. If a wanted script fails, greenboot reports the failure, but the system health status remains unaffected and greenboot performs neither a rollback nor a reboot. You can also specify shell scripts that will run after a check:
Green Contains the scripts to run after a successful boot.
Place these scripts into the /etc/greenboot/green.d directory. Greenboot reports that the boot was successful.
Red Contains the scripts to run after a failed boot. Place these scripts into the /etc/greenboot/red.d directory. The system attempts to boot three times and, in case of failure, it executes the scripts. Greenboot reports that the boot failed.
The following diagram illustrates the RHEL for Edge image roll back process. After booting the updated operating system, greenboot runs the scripts in the required.d and wanted.d directories. If any of the scripts fail in the required.d directory, greenboot runs any scripts in the red.d directory, and then reboots the system. Greenboot makes 2 more attempts to boot on the upgraded system. If the scripts in required.d still fail during the third boot attempt, greenboot runs the red.d scripts one last time so that they can attempt a corrective action. If the issue is still not resolved, greenboot rolls the system back from the current rpm-ostree deployment to the stable deployment.
15.5.3. Greenboot health check status When deploying your updated system, wait until the greenboot health checks have finished before you make configuration changes or deploy applications. This ensures that your changes are not lost if greenboot rolls your rpm-ostree system back to an earlier state. The greenboot-healthcheck service runs once and then exits. You can check the status of the service to know if it is done, and to know the outcome, by using the following commands: systemctl is-active greenboot-healthcheck.service This command reports active when the service has exited. If the service did not run, it shows inactive. systemctl show --property=SubState --value greenboot-healthcheck.service Reports exited when the checks are done, and running while they are still in progress. systemctl show --property=Result --value greenboot-healthcheck.service Reports success when the checks passed. systemctl show --property=ExecMainStatus --value greenboot-healthcheck.service Reports the numerical exit code of the service: 0 means success and nonzero values mean that a failure occurred. cat /run/motd.d/boot-status Shows a message, such as "Boot Status is GREEN - Health Check SUCCESS".
15.5.4. Checking greenboot health checks statuses Check the status of greenboot health checks before making changes to the system or during troubleshooting. Use one of the following options to check that the greenboot scripts have finished running: To see a report of health check status, enter: USD systemctl show --property=SubState --value greenboot-healthcheck.service The following outputs are possible: start means that greenboot checks are still running. exited means that checks have passed and greenboot has exited. Greenboot runs the scripts in the green.d directory when the system is in a healthy state. failed means that checks have not passed. Greenboot runs the scripts in the red.d directory when the system is in this state and might restart the system.
To see a report showing the numerical exit code of the service, where 0 means success and nonzero values mean a failure occurred, use the following command: USD systemctl show --property=ExecMainStatus --value greenboot-healthcheck.service To see a report showing a message about boot status, such as Boot Status is GREEN - Health Check SUCCESS , enter: USD cat /run/motd.d/boot-status 15.5.5. Manually rolling back RHEL for Edge images When you upgrade your operating system, a new deployment is created, and the rpm-ostree package also keeps the deployment. If there are issues on the updated version of the operating system, you can manually roll back to the deployment with a single rpm-ostree command, or by selecting the deployment in the GRUB boot loader. To manually roll back to a version, perform the following steps. Prerequisite You updated your system and it failed. Procedure Optional: Check for the fail error message: Run the rollback command: The command output provides details about the commit ID that is being moved and indicates a completed transaction with the details of the package being removed. Reboot the system. The command activates the commit with the stable content. The changes are applied and the version is restored. 15.5.6. Rolling back RHEL for Edge images using an automated process Greenboot checks provides a framework that is integrated into the boot process and can trigger rpm-ostree rollbacks when a health check fails. For the health checks, you can create a custom script that indicates whether a health check passed or failed. Based on the result, you can decide when a rollback should be triggered. The following procedure shows how to create an health check script example: Procedure Create a script that returns a standard exit code 0 . For example, the following script ensures that the configured DNS server is available: Include an executable file for the health checks at /etc/greenboot/check/required.d/ . During the reboot, the script is executed as part of the boot process, before the system enters the boot-complete.target unit. If the health checks are successful, no action is taken. If the health checks fail, the system will reboot several times, before marking the update as failed and rolling back to the update. Verification To check if the default gateway is reachable, run the following health check script: Create a script that returns a standard exit code 0 . Include an executable file for the health checks at /etc/greenboot/check/required.d/ directory.
[ "composer-cli blueprints save BLUEPRINT-NAME", "composer-cli blueprints push BLUEPRINT-NAME.toml", "composer-cli blueprints show BLUEPRINT-NAME", "composer-cli blueprints depsolve BLUEPRINT-NAME", "composer-cli compose start-ostree --ref rhel/9/x86_64/edge --url http://localhost:8080/repo <blueprint-name> edge-commit", "composer-cli compose image <UUID>", "tar -xf UUID .tar -C /var/tmp", "ostree --repo=/var/tmp/repo log rhel/9/x86_64/edge commit d523ef801e8b1df69ddbf73ce810521b5c44e9127a379a4e3bba5889829546fa Parent: f47842de7e6859cee07d743d3c67949420874727883fa9dbbaeb5824ad949d91 ContentChecksum: f0f6703696331b661fa22d97358db48ba5f8b62711d9db83a00a79b3ae0dfe16 Date: 2023-06-04 20:22:28 /+0000 Version: 9", "sudo ostree --repo=/var/srv/httpd/repo pull-local /var/tmp/repo 20 metadata, 22 content objects imported; 0 bytes content written", "ostree --repo=/var/srv/httpd/repo log rhel/9/x86_64/edge commit d523ef801e8b1df69ddbf73ce810521b5c44e9127a379a4e3bba5889829546fa Parent: f47842de7e6859cee07d743d3c67949420874727883fa9dbbaeb5824ad949d91 ContentChecksum: f0f6703696331b661fa22d97358db48ba5f8b62711d9db83a00a79b3ae0dfe16 Date: 2023-06-04 20:22:28 /+0000 Version: 9 (no subject) commit f47842de7e6859cee07d743d3c67949420874727883fa9dbbaeb5824ad949d91 ContentChecksum: 9054de3fe5f1210e3e52b38955bea0510915f89971e3b1ba121e15559d5f3a63 Date: 2023-06-04 20:01:08 /+0000 Version: 9 (no subject)", "tar -xf UUID-commit.tar -C UPGRADE_FOLDER", "ostree --repo=/usr/share/nginx/html/repo pull-local UPGRADE_FOLDER ostree --repo=/usr/share/nginx/html/repo summary -u", "rpm-ostree status", "rpm-ostree upgrade --check", "rpm-ostree upgrade", "rpm-ostree status", "systemctl reboot", "rpm-ostree status", "rpm-ostree db diff parent_commit new_commit", "composer-cli compose start-ostree --ref ostree_ref --url URL-OSTree-repository -blueprint_name_ image-type", "composer-cli compose status", "tar -xf UUID-commit.tar -C upgrade_folder", "ostree --repo=/var/www/html/repo pull-local /tmp/ostree-commit/repo ostree --repo=/var/www/html/repo summary -u", "rpm-ostree status", "rpm-ostree upgrade --check", "rpm-ostree upgrade", "rpm-ostree status", "systemctl reboot", "rpm-ostree status", "rpm-ostree db diff parent_commit new_commit", "cat ./child-commit_ID-container.tar | sudo podman load", "sudo podman run -p 8080:8080 localhost/edge-test", "ostree --repo=/var/www/html/repo pull-local /tmp/ostree-commit/repo ostree --repo=/var/www/html/repo summary -u", "rpm-ostree status", "rpm-ostree upgrade --check", "rpm-ostree upgrade", "rpm-ostree status", "systemctl reboot", "rpm-ostree upgrade", "rpm-ostree status", "rpm-ostree db diff parent_commit new_commit", "sudo composer-cli compose start blueprint-name edge-commit", "sudo composer-cli compose start-ostree --ref rhel/8/x86_64/edge --parent parent-OSTree-REF --url URL blueprint-name edge-commit", "sudo tar -xf image_file -C /var/www/html", "systemctl start httpd.service", "sudo cat /etc/ostree/remotes.d/edge.conf", "sudo ostree remote show-url edge", "ostree remote refs edge", "Error: Remote refs not available; server has no summary file", "sudo ostree remote add \\ --no-gpg-verify rhel9 http://192.168.122.1/repo/", "sudo cat /etc/ostree/remotes.d/ edge .conf [remote \"edge\"] url=http://192.168.122.1/ostree/repo/ gpg-verify=false", "sudo cat /etc/ostree/remotes.d/ rhel9 .conf [remote \"edge\"] url=http://192.168.122.1/repo/ gpg-verify=false", "sudo ostree remote show-url rhel9 http://192.168.122.1/ostree-rhel9/repo/", "sudo ostree remote list output: edge rhel9", "rpm-ostree 
rebase rhel9:rhel/9/x86_64/edge", "systemctl reboot", "rpm-ostree status", "rpm-ostree status", "top", "sudo rpm-ostree rollback", "systemctl reboot", "sudo ostree remote delete edge", "systemctl reload rpm-ostreed systemctl enable rpm-ostreed-automatic.timer --now", "rpm-ostree status", "State: idle; auto updates enabled (check; last run <minutes> ago)", "systemctl enable rpm-ostreed-automatic.timer --now", "rpm-ostree status", "State: idle AutomaticUpdates: stage; rpm-ostreed-automatic.timer: last run <time> ago", "systemctl start rpm-ostreed-automatic.service", "rpm-ostree status State: busy AutomaticUpdates: stage; rpm-ostreed-automatic.service: running Transaction: automatic (stage)", "rpm-ostree status State: idle AutomaticUpdates: stage; rpm-ostreed-automatic.timer: last run <time> ago Deployments:", "etc └─ greenboot ├─ check | └─ required.d | └─ init .py └─ green.d └─ red.d", "systemctl show --property=SubState --value greenboot-healthcheck.service", "systemctl show --property=ExecMainStatus --value greenboot-healthcheck.service", "cat /run/motd.d/boot-status", "journalctl -u greenboot-healthcheck.service.", "rpm-ostree rollback", "systemctl reboot", "#!/bin/bash DNS_SERVER=USD(grep ^nameserver /etc/resolv.conf | head -n 1 | cut -f2 -d\" \") COUNT=0 check DNS server is available ping -c1 USDDNS_SERVER while [ USD? != '0' ] && [ USDCOUNT -lt 10 ]; do ((COUNT++)) echo \"Checking for DNS: Attempt USDCOUNT .\" sleep 10 ping -c 1 USDDNS_SERVER done", "chmod +x check-dns.sh", "#!/bin/bash DEF_GW=USD(ip r | awk '/^default/ {print USD3}') SCRIPT=USD(basename USD0) count=10 connected=0 ping_timeout=5 interval=5 while [ USDcount -gt 0 -a USDconnected -eq 0 ]; do echo \"USDSCRIPT: Pinging default gateway USDDEF_GW\" ping -c 1 -q -W USDping_timeout USDDEF_GW > /dev/null 2>&1 && connected=1 || sleep USDinterval ((--count)) done if [ USDconnected -eq 1 ]; then echo \"USDSCRIPT: Default gateway USDDEF_GW is reachable.\" exit 0 else echo \"USDSCRIPT: Failed to ping default gateway USDDEF_GW!\" 1>&2 exit 1 fi", "chmod +x check-gw.sh" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_installing_and_managing_rhel_for_edge_images/managing-rhel-for-edge-images_composing-installing-managing-rhel-for-edge-images
Chapter 4. Updating the Red Hat Virtualization Manager
Chapter 4. Updating the Red Hat Virtualization Manager Prerequisites The data center compatibility level must be set to the latest version to ensure compatibility with the updated storage version. Procedure On the Manager machine, check if updated packages are available: Update the setup packages: # yum update ovirt\*setup\* rh\*vm-setup-plugins Update the Red Hat Virtualization Manager with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service. When the script completes successfully, the following message appears: Note The engine-setup script is also used during the Red Hat Virtualization Manager installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup . Important The update process might take some time. Do not stop the process before it completes. Update the base operating system and any optional packages installed on the Manager: Important If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict) . Important If any kernel packages were updated, reboot the machine to complete the update.
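For orientation, the sequence described above can be condensed into the following sketch, run as root on the Manager machine. Every command is taken from this procedure; the final reboot applies only when kernel packages were updated:
# engine-upgrade-check
# yum update ovirt\*setup\* rh\*vm-setup-plugins
# engine-setup
# yum update --nobest
# reboot    # only if kernel packages were updated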
[ "engine-upgrade-check", "yum update ovirt\\*setup\\* rh\\*vm-setup-plugins", "engine-setup", "Execution of setup completed successfully", "yum update --nobest" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/migrating_from_a_standalone_manager_to_a_self-hosted_engine/Updating_the_Red_Hat_Virtualization_Manager_migrating_to_SHE
Chapter 6. Installing a cluster on Azure with network customizations
Chapter 6. Installing a cluster on Azure with network customizations In OpenShift Container Platform version 4.14, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Microsoft Azure. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 6.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . 
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. 
Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Additional resources Installation configuration parameters for Azure 6.5.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.5.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 6.1. Machine types based on 64-bit x86 architecture standardBSFamily standardDADSv5Family standardDASv4Family standardDASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHCSFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 6.5.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 6.2. Machine types based on 64-bit ARM architecture standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 6.5.4. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. 
Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 6.5.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 6.5.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 13 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 14 region: centralus 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19 1 10 15 17 Required. The installation program prompts you for this value. 2 6 11 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 12 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 13 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. 
If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 14 Specify the name of the resource group that contains the DNS zone for your base domain. 16 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 18 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 19 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.5.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. 
For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.6. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 6.7. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. 
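Before moving into the procedure, the following minimal sketch makes the phase 1/phase 2 split from the previous section concrete. It shows only the phase 1 fields as they might appear in install-config.yaml; the values are the illustrative placeholders from the sample file earlier in this chapter, not recommendations:
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
Plugin-specific settings beyond these fields are phase 2 material and belong in the cluster-network-03-config.yml manifest that the following procedure creates.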
Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 6.8. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 6.8.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 6.2. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. 
Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 6.3. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. You can change this value by migrating from OpenShift SDN to OVN-Kubernetes. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 6.4. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 6.5. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. 
You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 6.6. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 6.7. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. 
The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 6.8. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 6.9. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 6.10. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 6.11. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 6.12. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . 
Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 6.9. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations. Note This configuration is necessary to run both Linux and Windows nodes in the same cluster. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. Note For more information about using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . 
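Before continuing, it can help to confirm that the stanza was written the way you intended. One quick, optional check, which is not part of the documented procedure, is simply to print the manifest back: USD cat <installation_directory>/manifests/cluster-network-03-config.yml The hybridOverlayConfig block should appear under spec.defaultNetwork.ovnKubernetesConfig exactly as you entered it. Remember that the installation program deletes the manifests/ directory when it creates the cluster, which is why backing up the file is suggested above.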
Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 6.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.11. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 6.11.1. 
Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 6.11.2. 
Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 6.11.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions: Example 6.3. Required Azure permissions Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.Resources/subscriptions/resourceGroups/delete Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/delete Microsoft.Authorization/roleAssignments/write Microsoft.Authorization/roleDefinitions/read Microsoft.Authorization/roleDefinitions/write Microsoft.Authorization/roleDefinitions/delete Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/containers/delete Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.ManagedIdentity/userAssignedIdentities/delete Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete Microsoft.Storage/register/action Microsoft.ManagedIdentity/register/action Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. 
Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 6.11.2.2. Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. 
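For reference, a filled-in invocation might look like the following sketch. The resource name, region, IDs, directory paths, and DNS zone resource group shown here are hypothetical placeholder values rather than values from this procedure; substitute your own:
USD ccoctl azure create-all \
  --name=mycluster-wi \
  --output-dir=./ccoctl-output \
  --region=centralus \
  --subscription-id=11111111-2222-3333-4444-555555555555 \
  --credentials-requests-dir=./credrequests \
  --dnszone-resource-group-name=mycluster-dns-rg \
  --tenant-id=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
In this sketch, the generated manifests and tls directories are written under ./ccoctl-output.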
To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 6.11.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> 1 # ... 1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 6.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. 
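Note If you need to confirm your subscription ID and tenant ID, the Azure CLI can print them for the currently signed-in account. This is an optional, illustrative check rather than part of the documented procedure, and it assumes you have already run az login:
USD az account show --query '{subscriptionId:id, tenantId:tenantId}' --output table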
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 6.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
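Note As an optional, illustrative check that is not part of the documented procedure, you can confirm that the Telemetry client is running by listing its pod in the openshift-monitoring namespace; the exact pod name varies by cluster:
USD oc get pods -n openshift-monitoring | grep telemeter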
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 6.15. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4", "controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 13 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 14 region: centralus 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 
19", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm 
release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "az login", "ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7", "ls <path_to_ccoctl_output_dir>/manifests", "azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_azure/installing-azure-network-customizations
6.4. Virtualization
6.4. Virtualization qemu-kvm component, BZ# 1159613 If a virtio device is created where the number of vectors is set to a value higher than 32, the device behaves as if it was set to a zero value on Red Hat Enterprise Linux 6, but not on Enterprise Linux 7. The resulting vector setting mismatch causes a migration error if the number of vectors on any virtio device on either platform is set to 33 or higher. It is, therefore, not recommended to set the vector value to be greater than 32. qemu-kvm component, BZ# 1027582 Microsoft Windows 8.1 and Microsoft Windows Server 2012 R2 require some CPU features, for example Compare Exchange 8Byte and Compare Exchange 16Byte, which are not present in all qemu-kvm CPU models. As a consequence, Microsoft Windows 8.1 and Microsoft Windows Server 2012 R2 guests do not boot if they use the following CPU model definitions: Opteron_G1, Conroe, and kvm64. To work around this problem, use CPU models that include the features required by Microsoft Windows 8.1 and Microsoft Windows Server 2012 R2, for example Penryn, Nehalem, Westmere, SandyBridge, Haswell, Opteron_G2, Opteron_G3, Opteron_G4, or Opteron_G5. kernel component, BZ# 1025868 KVM (Kernel-based Virtual Machine) cannot handle the values written in the MSR_IA32_MC4_CTL preprocessor macro by Linux guests when using some CPU family or model values. As a consequence, kernel panic occurs when booting on Red Hat Enterprise Linux 4 guests. Red Hat Enterprise Linux 5 and later incorrectly ignore certain exceptions so they are not affected. To work around this problem, use the nomce kernel command-line option on the guest, which disables MCE support. Alternatively, use a different CPU model name on the virtual machine configuration. As a result, guests boot as expected and kernel panic no longer occurs. kernel component, BZ# 1035571 After alternately hot plugging and unplugging SCSI disks more than three times, the guest displays incorrect information about the SCSI disk that has been removed. To work around this problem, the guest might need to wait for up to three minutes before it can rescan the bus to obtain correct information about the changed device. virtio-win component When upgrading the NetKVM driver through the Windows Device Manager, the old registry values are not removed. As a consequence, for example, non-existent parameters may be available. qemu-kvm component When working with very large images (larger than 2TB) created with very small cluster sizes (for example, 512 bytes), block I/O errors can occur due to timeouts in qemu. To prevent this problem from occurring, use the default cluster size of 64KiB or larger. kernel component On Microsoft Windows Server 2012 containing large dynamic VHDX (Hyper-V virtual hard disk) files and using the ext3 file system, a call trace can appear, and, consequently, it is not possible to shut down the guest. To work around this problem, use the ext4 file system or set a logical block size of 1MB when creating a VHDX file. Note that this can only be done by using Microsoft PowerShell, as the Hyper-V Manager does not expose the -BlockSizeBytes option, which has a default value of 32MB. To create a dynamic VHDX file with an approximate size of 2.5TB and 1MB block size run: libvirt component The storage drivers do not support the virsh vol-resize command options --allocate and --shrink .
Use of the --shrink option will result in the following error message: Use of the --allocate option will result in the following error message: Shrinking a volume's capacity is possible as long as the value provided on the command line is greater than the volume allocation value as seen with the virsh vol-info command. You can shrink an existing volume by name through the followind sequence of steps: Dump the XML of the larger volume into a file using the vol-dumpxml . Edit the file to change the name, path, and capacity values, where the capacity must be greater than or equal to the allocation. Create a temporary smaller volume using the vol-create with the edited XML file. Back up and restore the larger volumes data using the vol-download and vol-upload commands to the smaller volume. Use the vol-delete command to remove the larger volume. Use the vol-clone command to restore the name from the larger volume. Use the vol-delete command to remove the temporary volume. In order to allocate more space on the volume, follow a similar sequence, but adjust the allocation to a larger value than the existing volume. virtio-win component It is not possible to downgrade a driver using the Search for the best driver in these locations option because the newer and installed driver will be selected as the "best" driver. If you want to force installation of a particular driver version, use the Don't search option and the Have Disk button to select the folder of the older driver. This method will allow you to install an older driver on a system that already has a driver installed. virtio-win component BZ# 1052845 Performing Automatic System Recovery (ASR) on Windows 2003 guest system with virtio-blk attached system disk fails. To work around this issue, the following files need to be copied from virtio-win floppy image to ASR floppy image : txtsetup.oem , disk1 , \i386(amd64)\Win2003\* kernel component There is a known issue with the Microsoft Hyper-V host. If a legacy network interface controller (NIC) is used on a multiple-CPU virtual machine, there is an interrupt problem in the emulated hardware when the IRQ balancing daemon is running. Call trace information is logged in the /var/log/messages file. libvirt component, BZ# 888635 Under certain circumstances, virtual machines try to boot from an incorrect device after a network boot failure. For more information, please refer to this article on Customer Portal. grubby component, BZ# 893390 When a Red Hat Enterprise Linux 6.4 guest updates the kernel and then the guest is turned off through Microsoft Hyper-V Manager, the guest fails to boot due to incomplete grub information. This is because the data is not synced properly to disk when the machine is turned off through Hyper-V Manager. To work around this problem, execute the sync command before turning the guest off. kernel component Using the mouse scroll wheel does not work on Red Hat Enterprise Linux 6.4 guests that run under certain version of Microsoft Hyper-V Manager. However, the scroll wheel works as expected when the vncviewer utility is used. kernel component, BZ# 874406 Microsoft Windows Server 2012 guests using the e1000 driver can become unresponsive consuming 100% CPU during boot or reboot. kernel component When a kernel panic is triggered on a Microsoft Hyper-V guest, the kdump utility does not capture the kernel error information; an error is only displayed on the command line. This is a host problem. Guest kdump works as expected on Microsoft Hyper-V 2012 R2 host. 
qemu-kvm component, BZ# 871265 AMD Opteron G1, G2 or G3 CPU models on qemu-kvm use the family and model values as follows: family=15 and model=6. If these values are larger than 20, the lahf_lm CPU feature is ignored by Linux guests, even when the feature is enabled. To work around this problem, use a different CPU model, for example AMD Opteron G4. qemu-kvm component, BZ# 860929 KVM guests must not be allowed to update the host CPU microcode. KVM does not allow this, and instead always returns the same microcode revision or patch level value to the guest. If the guest tries to update the CPU microcode, it will fail and show an error message similar to: To work around this, configure the guest to not install CPU microcode updates; for example, uninstall the microcode_ctl package on Red Hat Enterprise Linux or Fedora guests. virt-p2v component, BZ# 816930 Converting a physical server running either Red Hat Enterprise Linux 4 or Red Hat Enterprise Linux 5 which has its file system root on an MD device is not supported. Converting such a guest results in a guest which fails to boot. Note that conversion of a Red Hat Enterprise Linux 6 server which has its root on an MD device is supported. virt-p2v component, BZ# 808820 When converting a physical host with multipath storage, Virt-P2V presents all available paths for conversion. Only a single path must be selected. This must be a currently active path. virtio-win component, BZ# 615928 The balloon service on Windows 7 guests can only be started by the Administrator user. virtio-win component, BZ# 612801 A Windows virtual machine must be restarted after the installation of the kernel Windows driver framework. If the virtual machine is not restarted, it may crash when a memory balloon operation is performed. qemu-kvm component, BZ# 720597 Installation of Windows 7 Ultimate x86 (32-bit) Service Pack 1 on a guest with more than 4GB of RAM and more than one CPU from a DVD medium can lead to the system being unresponsive and, consequently, to a crash during the final steps of the installation process. To work around this issue, use the Windows Update utility to install the Service Pack. qemu-kvm component, BZ# 612788 A dual function Intel 82576 Gigabit Ethernet Controller interface (codename: Kawela, PCI Vendor/Device ID: 8086:10c9) cannot have both physical functions (PFs) device-assigned to a Windows 2008 guest. Either physical function can be device assigned to a Windows 2008 guest (PCI function 0 or function 1), but not both. virt-v2v component, BZ# 618091 The virt-v2v utility is able to convert guests running on an ESX server. However, if an ESX guest has a disk with a snapshot, the snapshot must be on the same datastore as the underlying disk storage. If the snapshot and the underlying storage are on different datastores, virt-v2v will report a 404 error while trying to retrieve the storage. virt-v2v component, BZ# 678232 The VMware Tools application on Microsoft Windows is unable to disable itself when it detects that it is no longer running on a VMware platform. Consequently, converting a Microsoft Windows guest from VMware ESX, which has VMware Tools installed, will result in errors. These errors usually manifest as error messages on start-up, and a "Stop Error" (also known as a BSOD) when shutting down the guest. To work around this issue, uninstall VMware Tools on Microsoft Windows guests prior to conversion.
libguestfs component The libguestfs packages do not support remote access to disks over the network in Red Hat Enterprise Linux 6. Consequently, the virt-sysprep tool as well as other tools do not work with remote disks. Users who need to access disks remotely with tools such as virt-sysprep are advised to upgrade to Red Hat Enterprise Linux 7.
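Note The libvirt volume resize workaround described earlier in this section can be sketched with the following command sequence. The pool name ( default ), volume names ( bigvol , smallvol ), and backup file path are hypothetical examples only; adjust them to your environment and verify each step before deleting any volume:
USD virsh vol-dumpxml bigvol --pool default > smallvol.xml
# Edit smallvol.xml: change the name, path, and capacity values (capacity must be greater than or equal to the allocation)
USD virsh vol-create default smallvol.xml
USD virsh vol-download bigvol /var/tmp/bigvol.img --pool default
USD virsh vol-upload smallvol /var/tmp/bigvol.img --pool default
USD virsh vol-delete bigvol --pool default
USD virsh vol-clone smallvol bigvol --pool default
USD virsh vol-delete smallvol --pool default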
[ "New-VHD -Path .\\MyDisk.vhdx -SizeBytes 5120MB -BlockSizeBytes 1MB -Dynamic", "error: invalid argument: storageVolumeResize: unsupported flags (0x4)", "error: invalid argument: storageVolumeResize: unsupported flags (0x1)", "CPU0: update failed (for patch_level=0x6000624)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/virtualization_issues
Chapter 41. Online Storage Configuration Troubleshooting
Chapter 41. Online Storage Configuration Troubleshooting This section provides solutions to common problems users experience during online storage reconfiguration. Logical unit removal status is not reflected on the host. When a logical unit is deleted on a configured filer, the change is not reflected on the host. In such cases, lvm commands will hang indefinitely when dm-multipath is used, as the logical unit has now become stale . To work around this, perform the following procedure: Procedure 41.1. Working Around Stale Logical Units Determine which mpath link entries in /etc/lvm/cache/.cache are specific to the stale logical unit. To do this, run the following command: Example 41.1. Determine specific mpath link entries For example, if stale-logical-unit is 3600d0230003414f30000203a7bc41a00, the following results may appear: This means that 3600d0230003414f30000203a7bc41a00 is mapped to two mpath links: dm-4 and dm-5 . Next, open /etc/lvm/cache/.cache . Delete all lines containing stale-logical-unit and the mpath links that stale-logical-unit maps to. Example 41.2. Delete relevant lines Using the same example as in the previous step, the lines you need to delete are:
[ "ls -l /dev/mpath | grep stale-logical-unit", "lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00 -> ../dm-4 lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00p1 -> ../dm-5", "/dev/dm-4 /dev/dm-5 /dev/mapper/3600d0230003414f30000203a7bc41a00 /dev/mapper/3600d0230003414f30000203a7bc41a00p1 /dev/mpath/3600d0230003414f30000203a7bc41a00 /dev/mpath/3600d0230003414f30000203a7bc41a00p1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/troubleshooting
Support
Support OpenShift Container Platform 4.14 Getting support for OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "oc api-resources -o name | grep config.openshift.io", "oc explain <resource_name>.config.openshift.io", "oc get <resource_name>.config -o yaml", "oc edit <resource_name>.config -o yaml", "oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'", "curl -G -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://USD(oc get route prometheus-k8s-federate -n openshift-monitoring -o jsonpath=\"{.spec.host}\")/federate --data-urlencode 'match[]={__name__=~\"cluster:usage:.*\"}' --data-urlencode 'match[]={__name__=\"count:up0\"}' --data-urlencode 'match[]={__name__=\"count:up1\"}' --data-urlencode 'match[]={__name__=\"cluster_version\"}' --data-urlencode 'match[]={__name__=\"cluster_version_available_updates\"}' --data-urlencode 'match[]={__name__=\"cluster_version_capability\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_up\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_conditions\"}' --data-urlencode 'match[]={__name__=\"cluster_version_payload\"}' --data-urlencode 'match[]={__name__=\"cluster_installer\"}' --data-urlencode 'match[]={__name__=\"cluster_infrastructure_provider\"}' --data-urlencode 'match[]={__name__=\"cluster_feature_set\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_object_counts:sum\"}' --data-urlencode 'match[]={__name__=\"ALERTS\",alertstate=\"firing\"}' --data-urlencode 'match[]={__name__=\"code:apiserver_request_total:rate:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_memory_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"workload:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"workload:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:virt_platform_nodes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:node_instance_type_count:sum\"}' --data-urlencode 'match[]={__name__=\"cnv:vmi_status_running:count\"}' --data-urlencode 'match[]={__name__=\"cluster:vmi_request_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_sockets:sum\"}' --data-urlencode 'match[]={__name__=\"subscription_sync_total\"}' --data-urlencode 'match[]={__name__=\"olm_resolution_duration_seconds\"}' --data-urlencode 'match[]={__name__=\"csv_succeeded\"}' --data-urlencode 'match[]={__name__=\"csv_abnormal\"}' --data-urlencode 'match[]={__name__=\"cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_used_raw_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_health_status\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_total_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_used_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_health_status\"}' --data-urlencode 'match[]={__name__=\"job:ceph_osd_metadata:count\"}' --data-urlencode 'match[]={__name__=\"job:kube_pv:count\"}' --data-urlencode 
'match[]={__name__=\"job:odf_system_pvs:count\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops_bytes:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_versions_running:count\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_total_unhealthy_buckets:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_bucket_count:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_total_object_count:sum\"}' --data-urlencode 'match[]={__name__=\"odf_system_bucket_count\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"odf_system_objects_total\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"noobaa_accounts_num\"}' --data-urlencode 'match[]={__name__=\"noobaa_total_usage\"}' --data-urlencode 'match[]={__name__=\"console_url\"}' --data-urlencode 'match[]={__name__=\"cluster:ovnkube_master_egress_routing_via_host:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_instances:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_enabled_instance_up:max\"}' --data-urlencode 'match[]={__name__=\"cluster:ingress_controller_aws_nlb_active:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:min\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:max\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:avg\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:median\"}' --data-urlencode 'match[]={__name__=\"cluster:openshift_route_info:tls_termination:sum\"}' --data-urlencode 'match[]={__name__=\"insightsclient_request_send_total\"}' --data-urlencode 'match[]={__name__=\"cam_app_workload_migrations\"}' --data-urlencode 'match[]={__name__=\"cluster:apiserver_current_inflight_requests:sum:max_over_time:2m\"}' --data-urlencode 'match[]={__name__=\"cluster:alertmanager_integrations:max\"}' --data-urlencode 'match[]={__name__=\"cluster:telemetry_selected_series:count\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_series:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_samples_appended_total:sum\"}' --data-urlencode 'match[]={__name__=\"monitoring:container_memory_working_set_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_series_added:topk3_sum1h\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_samples_post_metric_relabeling:topk3\"}' --data-urlencode 'match[]={__name__=\"monitoring:haproxy_server_http_responses_total:sum\"}' --data-urlencode 'match[]={__name__=\"rhmi_status\"}' --data-urlencode 'match[]={__name__=\"status:upgrading:version:rhoam_state:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_critical_alerts:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_warning_alerts:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_percentile:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_remaining_error_budget:max\"}' --data-urlencode 'match[]={__name__=\"cluster_legacy_scheduler_policy\"}' --data-urlencode 'match[]={__name__=\"cluster_master_schedulable\"}' --data-urlencode 'match[]={__name__=\"che_workspace_status\"}' --data-urlencode 'match[]={__name__=\"che_workspace_started_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_failure_total\"}' --data-urlencode 
'match[]={__name__=\"che_workspace_start_time_seconds_sum\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_count\"}' --data-urlencode 'match[]={__name__=\"cco_credentials_mode\"}' --data-urlencode 'match[]={__name__=\"cluster:kube_persistentvolume_plugin_type_counts:sum\"}' --data-urlencode 'match[]={__name__=\"visual_web_terminal_sessions_total\"}' --data-urlencode 'match[]={__name__=\"acm_managed_cluster_info\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_vcenter_info:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_esxi_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_node_hw_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:build_by_strategy:sum\"}' --data-urlencode 'match[]={__name__=\"rhods_aggregate_availability\"}' --data-urlencode 'match[]={__name__=\"rhods_total_users\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_storage_types\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_strategies\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_agent_strategies\"}' --data-urlencode 'match[]={__name__=\"appsvcs:cores_by_product:sum\"}' --data-urlencode 'match[]={__name__=\"nto_custom_profiles:count\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_configmap\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_secret\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_failures_total\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_requests_total\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_backup_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_restore_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_storage_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_redundancy_policy_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_defined_delete_namespaces_total\"}' --data-urlencode 'match[]={__name__=\"eo_es_misconfigured_memory_resources_info\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_data_nodes_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_created_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_deleted_total:sum\"}' --data-urlencode 'match[]={__name__=\"pod:eo_es_shards_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_cluster_management_state_info\"}' --data-urlencode 'match[]={__name__=\"imageregistry:imagestreamtags_count:sum\"}' --data-urlencode 'match[]={__name__=\"imageregistry:operations_count:sum\"}' --data-urlencode 'match[]={__name__=\"log_logging_info\"}' --data-urlencode 'match[]={__name__=\"log_collector_error_count_total\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_pipeline_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_input_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_output_info\"}' --data-urlencode 
'match[]={__name__=\"cluster:log_collected_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:log_logged_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kata_monitor_running_shim_count:sum\"}' --data-urlencode 'match[]={__name__=\"platform:hypershift_hostedclusters:max\"}' --data-urlencode 'match[]={__name__=\"platform:hypershift_nodepools:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_bucket_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_buckets_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_accounts:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_usage:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_system_health_status:max\"}' --data-urlencode 'match[]={__name__=\"ocs_advanced_feature_usage\"}' --data-urlencode 'match[]={__name__=\"os_image_url_override:sum\"}'", "INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)", "oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data", "oc extract secret/pull-secret -n openshift-config --to=.", "\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"<email_address>\"}", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \" <email_address> \" } } }", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' > pull-secret", "cp pull-secret pull-secret-backup", "set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret", "oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running", "oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1", "{ \"name\": \"clusterconfig/authentication\", \"duration_in_ms\": 730, 1 \"records_count\": 1, \"errors\": null, \"panic\": null }", "apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . spec: 1 gatherConfig: disabledGatherers: - all 2", "spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info", "apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . 
spec: gatherConfig: 1 disabledGatherers: all", "spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info", "apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled", "oc apply -f <your_datagather_definition>.yaml", "apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled", "apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}]", "oc get -n openshift-insights deployment insights-operator -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: insights-operator namespace: openshift-insights spec: template: spec: containers: - args: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1", "apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job spec: template: spec: initContainers: - name: insights-operator image: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 
1 terminationMessagePolicy: FallbackToLogsOnError volumeMounts:", "oc apply -n openshift-insights -f gather-job.yaml", "oc describe -n openshift-insights job/insights-operator-job", "Name: insights-operator-job Namespace: openshift-insights Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job-<your_job>", "oc logs -n openshift-insights insights-operator-job-<your_job> insights-operator", "I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms", "oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data", "oc delete -n openshift-insights job insights-operator-job", "oc extract secret/pull-secret -n openshift-config --to=.", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \"[email protected]\" } }", "curl -v -H \"User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> \" -H \"Authorization: Bearer <your_token> \" -F \"upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar\" https://console.redhat.com/api/ingress/v1/upload", "* Connection #0 to host console.redhat.com left intact {\"request_id\":\"393a7cf1093e434ea8dd4ab3eb28884c\",\"upload\":{\"account_number\":\"6274079\"}}%", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.11", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.11", "oc import-image is/must-gather -n openshift", "oc adm must-gather", "tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1", "oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.11 2", "oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')", "├── cluster-logging │ ├── clo │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ ├── clusterlogforwarder_cr │ │ ├── cr │ │ ├── csv │ │ ├── deployment │ │ └── logforwarding_cr │ ├── collector │ │ ├── fluentd-2tr64 │ ├── eo │ │ ├── csv │ │ ├── deployment │ │ └── elasticsearch-operator-7dc7d97b9d-jb4r4 │ ├── es │ │ ├── cluster-elasticsearch │ │ │ ├── aliases │ │ │ ├── health │ │ │ ├── indices │ │ │ ├── latest_documents.json │ │ │ ├── nodes │ │ │ ├── nodes_stats.json │ │ │ └── thread_pool │ │ ├── cr │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ └── logs │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ ├── install │ │ ├── co_logs │ │ ├── install_plan │ │ ├── olmo_logs │ │ └── subscription │ └── kibana │ ├── cr │ ├── kibana-9d69668d4-2rkvz ├── cluster-scoped-resources │ └── core │ ├── nodes │ │ ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml │ └── persistentvolumes │ ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml ├── event-filter.html ├── gather-debug.log └── namespaces ├── openshift-logging │ ├── apps │ │ ├── daemonsets.yaml │ │ ├── deployments.yaml │ │ ├── replicasets.yaml │ │ └── statefulsets.yaml │ ├── batch │ │ ├── cronjobs.yaml │ │ └── jobs.yaml │ ├── core │ │ ├── configmaps.yaml 
│ │ ├── endpoints.yaml │ │ ├── events │ │ │ ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml │ │ │ ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml │ │ ├── events.yaml │ │ ├── persistentvolumeclaims.yaml │ │ ├── pods.yaml │ │ ├── replicationcontrollers.yaml │ │ ├── secrets.yaml │ │ └── services.yaml │ ├── openshift-logging.yaml │ ├── pods │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ │ ├── cluster-logging-operator │ │ │ │ └── cluster-logging-operator │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ └── cluster-logging-operator-74dd5994f-6ttgt.yaml │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff │ │ │ ├── cluster-logging-operator-registry │ │ │ │ └── cluster-logging-operator-registry │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ ├── elasticsearch-im-app-1596030300-bpgcx │ │ │ ├── elasticsearch-im-app-1596030300-bpgcx.yaml │ │ │ └── indexmanagement │ │ │ └── indexmanagement │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── fluentd-2tr64 │ │ │ ├── fluentd │ │ │ │ └── fluentd │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── fluentd-2tr64.yaml │ │ │ └── fluentd-init │ │ │ └── fluentd-init │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── kibana-9d69668d4-2rkvz │ │ │ ├── kibana │ │ │ │ └── kibana │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── kibana-9d69668d4-2rkvz.yaml │ │ │ └── kibana-proxy │ │ │ └── kibana-proxy │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ └── route.openshift.io │ └── routes.yaml └── openshift-operators-redhat ├──", "oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=quay.io/kubevirt/must-gather 2", "tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1", "oc adm must-gather -- gather_network_logs", "tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1", "oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'", "oc get nodes", "oc debug node/my-cluster-node", "oc new-project dummy", "oc patch namespace dummy --type=merge -p '{\"metadata\": {\"annotations\": { \"scheduler.alpha.kubernetes.io/defaultTolerations\": \"[{\\\"operator\\\": \\\"Exists\\\"}]\"}}}'", "oc debug node/my-cluster-node", "chroot /host", "toolbox", "sos report -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on 1", "sos report --all-logs", "Your sosreport has been generated and saved in: /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 The checksum is: 382ffc167510fd71b4f12a4f40b97a4e", "oc debug node/my-cluster-node -- bash -c 'cat 
/host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1", "ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service", "ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'", "oc adm node-logs --role=master -u kubelet 1", "oc adm node-logs --role=master --path=openshift-apiserver", "oc adm node-logs --role=master --path=openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log", "oc adm must-gather --dest-dir /tmp/captures \\ <.> --source-dir '/tmp/tcpdump/' \\ <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \\ <.> --node-selector 'node-role.kubernetes.io/worker' \\ <.> --host-network=true \\ <.> --timeout 30s \\ <.> -- tcpdump -i any \\ <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300", "tmp/captures ├── event-filter.html ├── ip-10-0-192-217-ec2-internal 1 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:31.pcap ├── ip-10-0-201-178-ec2-internal 2 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:30.pcap ├── ip- └── timestamp", "oc get nodes", "oc debug node/my-cluster-node", "chroot /host", "ip ad", "toolbox", "tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1", "chroot /host crictl ps", "chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}'", "nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1", "oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1", "oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1", "chroot /host", "toolbox", "dnf install -y <package_name>", "chroot /host", "vi ~/.toolboxrc", "REGISTRY=quay.io 1 IMAGE=fedora/fedora:33-x86_64 2 TOOLBOX_NAME=toolbox-fedora-33 3", "toolbox", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.13.8 True False 8h Cluster version is 4.13.8", "oc describe clusterversion", "Name: version Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: ClusterVersion Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce URL: https://access.redhat.com/errata/RHSA-2023:4456 Version: 4.13.8 History: Completion Time: 2023-08-17T13:20:21Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce Started Time: 2023-08-17T12:59:45Z State: Completed Verified: false Version: 4.13.8", "ssh <user_name>@<load_balancer> systemctl status haproxy", "ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623'", "ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623'", "dig <wildcard_fqdn> @<dns_server>", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug 1", "./openshift-install create ignition-configs --dir=./install_dir", "tail -f ~/<installation_directory>/.openshift_install.log", "ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service", "oc adm node-logs --role=master -u kubelet", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f 
-u kubelet.service", "oc adm node-logs --role=master -u crio", "ssh [email protected]_name.sub_domain.domain journalctl -b -f -u crio.service", "curl -I http://<http_server_fqdn>:<port>/bootstrap.ign 1", "grep -is 'bootstrap.ign' /var/log/httpd/access_log", "ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service", "ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'", "curl -I http://<http_server_fqdn>:<port>/master.ign 1", "grep -is 'master.ign' /var/log/httpd/access_log", "oc get nodes", "oc describe node <master_node>", "oc get daemonsets -n openshift-sdn", "oc get pods -n openshift-sdn", "oc logs <sdn_pod> -n openshift-sdn", "oc get network.config.openshift.io cluster -o yaml", "./openshift-install create manifests", "oc get pods -n openshift-network-operator", "oc logs pod/<network_operator_pod_name> -n openshift-network-operator", "oc adm node-logs --role=master -u kubelet", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service", "oc adm node-logs --role=master -u crio", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service", "oc adm node-logs --role=master --path=openshift-apiserver", "oc adm node-logs --role=master --path=openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "curl https://api-int.<cluster_name>:22623/config/master", "dig api-int.<cluster_name> @<dns_server>", "dig -x <load_balancer_mco_ip_address> @<dns_server>", "ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master", "ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking", "openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text", "oc get pods -n openshift-etcd", "oc get pods -n openshift-etcd-operator", "oc describe pod/<pod_name> -n <namespace>", "oc logs pod/<pod_name> -n <namespace>", "oc logs pod/<pod_name> -c <container_name> -n <namespace>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd-", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>'", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "oc adm node-logs --role=master -u kubelet", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service", "oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired'", "ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service | grep -is 'x509: certificate has expired'", "curl -I http://<http_server_fqdn>:<port>/worker.ign 1", "grep -is 'worker.ign' /var/log/httpd/access_log", "oc get nodes", "oc describe node <worker_node>", "oc get pods -n openshift-machine-api", "oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api", "oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator", "oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy", "oc adm node-logs --role=worker -u kubelet", "ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl 
-b -f -u kubelet.service", "oc adm node-logs --role=worker -u crio", "ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service", "oc adm node-logs --role=worker --path=sssd", "oc adm node-logs --role=worker --path=sssd/sssd.log", "ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log", "ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a", "ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "curl https://api-int.<cluster_name>:22623/config/worker", "dig api-int.<cluster_name> @<dns_server>", "dig -x <load_balancer_mco_ip_address> @<dns_server>", "ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker", "ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking", "openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text", "oc get clusteroperators", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 1 csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending 2 csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc describe clusteroperator <operator_name>", "oc get pods -n <operator_namespace>", "oc describe pod/<operator_pod_name> -n <operator_namespace>", "oc logs pod/<operator_pod_name> -n <operator_namespace>", "oc get pod -o \"jsonpath={range .status.containerStatuses[*]}{.name}{'\\t'}{.state}{'\\t'}{.image}{'\\n'}{end}\" <operator_pod_name> -n <operator_namespace>", "oc adm release info <image_path>:<tag> --commits", "./openshift-install gather bootstrap --dir <installation_directory> 1", "./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address> 5", "INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"", "oc get nodes", "oc adm top nodes", "oc adm top node my-node", "oc debug node/my-node", "chroot /host", "systemctl is-active kubelet", "systemctl status kubelet", "oc adm node-logs --role=master -u kubelet 1", "oc adm node-logs --role=master --path=openshift-apiserver", "oc adm node-logs --role=master --path=openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log", "oc debug node/my-node", "chroot /host", "systemctl is-active crio", "systemctl status crio.service", "oc adm node-logs --role=master -u crio", "oc adm node-logs <node_name> -u crio", "ssh core@<node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service", "Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory", "can't stat lower layer ... because it does not exist. 
Going through storage to recreate the missing symlinks.", "oc adm cordon <node_name>", "oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data", "ssh [email protected] sudo -i", "systemctl stop kubelet", ".. for pod in USD(crictl pods -q); do if [[ \"USD(crictl inspectp USDpod | jq -r .status.linux.namespaces.options.network)\" != \"NODE\" ]]; then crictl rmp -f USDpod; fi; done", "crictl rmp -fa", "systemctl stop crio", "crio wipe -f", "systemctl start crio systemctl start kubelet", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.27.3", "oc adm uncordon <node_name>", "NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.27.3", "rpm-ostree kargs --append='crashkernel=256M'", "systemctl enable kdump.service", "systemctl reboot", "variant: openshift version: 4.14.0 metadata: name: 99-worker-kdump 1 labels: machineconfiguration.openshift.io/role: worker 2 openshift: kernel_arguments: 3 - crashkernel=256M storage: files: - path: /etc/kdump.conf 4 mode: 0644 overwrite: true contents: inline: | path /var/crash core_collector makedumpfile -l --message-level 7 -d 31 - path: /etc/sysconfig/kdump 5 mode: 0644 overwrite: true contents: inline: | KDUMP_COMMANDLINE_REMOVE=\"hugepages hugepagesz slub_debug quiet log_buf_len swiotlb\" KDUMP_COMMANDLINE_APPEND=\"irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable\" 6 KEXEC_ARGS=\"-s\" KDUMP_IMG=\"vmlinuz\" systemd: units: - name: kdump.service enabled: true", "nfs server.example.com:/export/cores core_collector makedumpfile -l --message-level 7 -d 31 extra_modules nfs", "butane 99-worker-kdump.bu -o 99-worker-kdump.yaml", "oc create -f 99-worker-kdump.yaml", "systemctl --failed", "journalctl -u <unit>.service", "NODEIP_HINT=192.0.2.1", "echo -n 'NODEIP_HINT=192.0.2.1' | base64 -w0", "Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx==", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-nodeip-hint-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-nodeip-hint-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration", "[connection] id=eno1 type=ethernet interface-name=eno1 master=bond1 slave-type=bond autoconnect=true autoconnect-priority=20", "[connection] id=eno2 type=ethernet interface-name=eno2 master=bond1 slave-type=bond autoconnect=true autoconnect-priority=20", "[connection] id=bond1 type=bond interface-name=bond1 autoconnect=true connection.autoconnect-slaves=1 autoconnect-priority=20 [bond] mode=802.3ad miimon=100 xmit_hash_policy=\"layer3+4\" [ipv4] method=auto", "base64 <directory_path>/en01.config", "base64 <directory_path>/eno2.config", "base64 <directory_path>/bond1.config", "export ROLE=<machine_role>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 
12-USD{ROLE}-sec-bridge-cni spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:;base64,<base-64-encoded-contents-for-bond1.conf> path: /etc/NetworkManager/system-connections/bond1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno1.conf> path: /etc/NetworkManager/system-connections/eno1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno2.conf> path: /etc/NetworkManager/system-connections/eno2.nmconnection filesystem: root mode: 0600", "oc create -f <machine_config_file_name>", "bond1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-extra-bridge spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/ovnk/extra_bridge mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond1 filesystem: root", "oc create -f <machine_config_file_name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-br-ex-override spec: config: ignition: version: 3.2.0 storage: files: - path: /var/lib/ovnk/iface_default_hint mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond0 1 filesystem: root", "oc create -f <machine_config_file_name>", "oc get nodes -o json | grep --color exgw-ip-addresses", "\"k8s.ovn.org/l3-gateway-config\": \\\"exgw-ip-address\\\":\\\"172.xx.xx.yy/24\\\",\\\"next-hops\\\":[\\\"xx.xx.xx.xx\\\"],", "oc debug node/<node_name> -- chroot /host sh -c \"ip a | grep mtu | grep br-ex\"", "Starting pod/worker-1-debug To use host binaries, run `chroot /host` 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 6: br-ex1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000", "oc debug node/<node_name> -- chroot /host sh -c \"ip a | grep -A1 -E 'br-ex|bond0'", "Starting pod/worker-1-debug To use host binaries, run `chroot /host` sh-5.1# ip a | grep -A1 -E 'br-ex|bond0' 2: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff -- 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff inet 10.xx.xx.xx/21 brd 10.xx.xx.255 scope global dynamic noprefixroute br-ex", "E0514 12:47:17.998892 2281 daemon.go:1350] content mismatch for file /etc/systemd/system/ovs-vswitchd.service: [Unit]", "oc debug node/<node_name>", "chroot /host", "ovs-appctl vlog/list", "console syslog file ------- ------ ------ backtrace OFF INFO INFO bfd OFF INFO INFO bond OFF INFO INFO bridge OFF INFO INFO bundle OFF INFO INFO bundles OFF INFO INFO cfm OFF INFO INFO collectors OFF INFO INFO command_line OFF INFO INFO connmgr OFF INFO INFO conntrack OFF INFO INFO conntrack_tp OFF INFO INFO coverage OFF INFO INFO ct_dpif OFF INFO INFO daemon OFF INFO INFO daemon_unix OFF INFO INFO dns_resolve OFF INFO INFO dpdk OFF INFO INFO", "Restart=always ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /var/lib/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /etc/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /run/openvswitch' 
ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg", "systemctl daemon-reload", "systemctl restart ovs-vswitchd", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master 1 name: 99-change-ovs-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - dropins: - contents: | [Service] ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg 2 ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg name: 20-ovs-vswitchd-restart.conf name: ovs-vswitchd.service", "oc apply -f 99-change-ovs-loglevel.yaml", "oc adm node-logs <node_name> -u ovs-vswitchd", "journalctl -b -f -u ovs-vswitchd.service", "oc get subs -n <operator_namespace>", "oc describe sub <subscription_name> -n <operator_namespace>", "Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy", "oc get catalogsources -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m", "oc describe catalogsource example-catalog -n openshift-marketplace", "Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m", "oc describe pod example-catalog-bwt8z -n openshift-marketplace", "Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in 
quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull", "oc get clusteroperators", "oc get pod -n <operator_namespace>", "oc describe pod <operator_pod_name> -n <operator_namespace>", "oc debug node/my-node", "chroot /host", "crictl ps", "crictl ps --name network-operator", "oc get pods -n <operator_namespace>", "oc logs pod/<pod_name> -n <operator_namespace>", "oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: true 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: false 1", "oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/master", "oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/worker", "oc get machineconfigpool/master --template='{{.spec.paused}}'", "oc get machineconfigpool/worker --template='{{.spec.paused}}'", "true", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False", "oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/master", "oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/worker", "oc get machineconfigpool/master --template='{{.spec.paused}}'", "oc get machineconfigpool/worker --template='{{.spec.paused}}'", "false", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"", "rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host", "oc get sub,csv -n <namespace>", "NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded", "oc delete subscription <subscription_name> -n <namespace>", "oc delete csv <csv_name> -n <namespace>", "oc get job,configmap -n openshift-marketplace", "NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s", "oc delete job <job_name> -n openshift-marketplace", "oc delete configmap <configmap_name> -n openshift-marketplace", "oc get sub,csv,installplan -n <namespace>", "message: 'Failed to delete all resource types, 1 remaining: Internal error occurred: error resolving resource'", "oc get namespaces", "operator-ns-1 
Terminating", "oc get crds", "oc delete crd <crd_name>", "oc get EtcdCluster -n <namespace_name>", "oc get EtcdCluster --all-namespaces", "oc delete <cr_name> <cr_instance_name> -n <namespace_name>", "oc get namespace <namespace_name>", "oc get sub,csv,installplan -n <namespace>", "oc project <project_name>", "oc get pods", "oc status", "skopeo inspect docker://<image_reference>", "oc edit deployment/my-deployment", "oc get pods -w", "oc get events", "oc logs <pod_name>", "oc logs <pod_name> -c <container_name>", "oc exec <pod_name> -- ls -alh /var/log", "total 124K drwxr-xr-x. 1 root root 33 Aug 11 11:23 . drwxr-xr-x. 1 root root 28 Sep 6 2022 .. -rw-rw----. 1 root utmp 0 Jul 10 10:31 btmp -rw-r--r--. 1 root root 33K Jul 17 10:07 dnf.librepo.log -rw-r--r--. 1 root root 69K Jul 17 10:07 dnf.log -rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log -rw-r--r--. 1 root root 480 Jul 17 10:07 hawkey.log -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 lastlog drwx------. 2 root root 23 Aug 11 11:14 openshift-apiserver drwx------. 2 root root 6 Jul 10 10:31 private drwxr-xr-x. 1 root root 22 Mar 9 08:05 rhsm -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 wtmp", "oc exec <pod_name> cat /var/log/<path_to_log>", "2023-07-10T10:29:38+0000 INFO --- logging initialized --- 2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms 2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile 2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories. 2023-07-10T10:29:38+0000 INFO Unable to read consumer identity 2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode. 
2023-07-10T10:29:38+0000 INFO", "oc exec <pod_name> -c <container_name> ls /var/log", "oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log>", "oc project <namespace>", "oc rsh <pod_name> 1", "oc rsh -c <container_name> pod/<pod_name>", "oc port-forward <pod_name> <host_port>:<pod_port> 1", "oc get deployment -n <project_name>", "oc debug deployment/my-deployment --as-root -n <project_name>", "oc get deploymentconfigs -n <project_name>", "oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>", "oc cp <local_path> <pod_name>:/<path> -c <container_name> 1", "oc cp <pod_name>:/<path> -c <container_name> <local_path> 1", "oc get pods -w 1", "oc logs -f pod/<application_name>-<build_number>-build", "oc logs -f pod/<application_name>-<build_number>-deploy", "oc logs -f pod/<application_name>-<build_number>-<random_string>", "oc describe pod/my-app-1-akdlg", "oc logs -f pod/my-app-1-akdlg", "oc exec my-app-1-akdlg -- cat /var/log/my-application.log", "oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log", "oc exec -it my-app-1-akdlg /bin/bash", "oc debug node/my-cluster-node", "chroot /host", "crictl ps", "crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print USD2}'", "nsenter -n -t 31150 -- ip ad", "Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume \"pvc-8837384d-69d7-40b2-b2e6-5df86943eef9\" Volume is already used by pod(s) sso-mysql-1-ns6b4", "oc delete pod <old_pod> --force=true --grace-period=0", "oc logs -f deployment/windows-machine-config-operator -n openshift-windows-machine-config-operator", "ssh -t -o StrictHostKeyChecking=no -o ProxyCommand='ssh -A -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -W %h:%p core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")' <username>@<windows_node_internal_ip> 1 2", "oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}", "ssh -L 2020:<windows_node_internal_ip>:3389 \\ 1 core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")", "oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}", "C:\\> net user <username> * 1", "oc adm node-logs -l kubernetes.io/os=windows --path= /ip-10-0-138-252.us-east-2.compute.internal containers /ip-10-0-138-252.us-east-2.compute.internal hybrid-overlay /ip-10-0-138-252.us-east-2.compute.internal kube-proxy /ip-10-0-138-252.us-east-2.compute.internal kubelet /ip-10-0-138-252.us-east-2.compute.internal pods", "oc adm node-logs -l kubernetes.io/os=windows --path=/kubelet/kubelet.log", "oc adm node-logs -l kubernetes.io/os=windows --path=journal", "oc adm node-logs -l kubernetes.io/os=windows --path=journal -u docker", "C:\\> powershell", "C:\\> Get-EventLog -LogName Application -Source Docker", "oc -n ns1 get service prometheus-example-app -o yaml", "labels: app: prometheus-example-app", "oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml", "apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app", "oc -n 
openshift-user-workload-monitoring get pods", "NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m", "oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator", "level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug", "oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-user-workload-monitoring get pods", "topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))", "topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))", "HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath={.status.ingress[].host})", "TOKEN=USD(oc whoami -t)", "curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/status/tsdb\"", "\"status\": \"success\",\"data\":{\"headStats\":{\"numSeries\":507473, \"numLabelPairs\":19832,\"chunkCount\":946298,\"minTime\":1712253600010, \"maxTime\":1712257935346},\"seriesCountByMetricName\": [{\"name\":\"etcd_request_duration_seconds_bucket\",\"value\":51840}, {\"name\":\"apiserver_request_sli_duration_seconds_bucket\",\"value\":47718},", "oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 -c prometheus --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'cd /prometheus/;du -hs USD(ls -dt */ | grep -Eo \"[0-9|A-Z]{26}\")'", "308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B", "oc debug prometheus-k8s-0 -n openshift-monitoring -c prometheus --image=USD(oc get po -n openshift-monitoring prometheus-k8s-0 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'ls -latr /prometheus/ | egrep -o \"[0-9|A-Z]{26}\" | head -3 | while read BLOCK; do rm -r /prometheus/USDBLOCK; done'", "oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- df -h /prometheus/", "Starting pod/prometheus-k8s-0-debug-j82w4 Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod", "oc <command> --loglevel <log_level>", "oc whoami -t", "sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/support/index
Chapter 74. Openshift Builds
Chapter 74. Openshift Builds
Since Camel 2.17
Only producer is supported
The Openshift Builds component is one of the Kubernetes Components which provides a producer to execute Openshift Builds operations.
74.1. Dependencies
When using openshift-builds with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration:
<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-kubernetes-starter</artifactId>
</dependency>
74.2. Configuring Options
Camel components are configured on two separate levels:
component level
endpoint level
74.2.1. Configuring Component Options
The component level is the highest level; it holds the general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connection, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL, in a configuration file (application.properties|yaml), or directly with Java code.
74.2.2. Configuring Endpoint Options
Endpoints are where you do most of your configuration, as endpoints often have many options, which allow you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders, which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, which gives you more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint.
74.3. Component Options
The Openshift Builds component supports 3 options, which are listed below.
Name Description Default Type kubernetesClient (producer) Autowired To use an existing kubernetes client. KubernetesClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean
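As an illustration of the component options above, the following is a minimal sketch of component-level configuration in a Spring Boot application. It assumes a reachable cluster whose credentials are resolved by the Fabric8 client defaults, and that a Fabric8 client version providing KubernetesClientBuilder is on the classpath; the bean and route names are illustrative assumptions, not part of this component's documentation:
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import org.apache.camel.builder.RouteBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class OpenshiftBuildsConfiguration {

    // Register a KubernetesClient in the registry. Because the component's
    // kubernetesClient option is autowired, a single bean of this type is picked
    // up automatically; it can also be referenced explicitly in an endpoint URI
    // as kubernetesClient=#kubernetesClient.
    @Bean
    public KubernetesClient kubernetesClient() {
        // Assumption: builds a client from the default kubeconfig or in-cluster configuration.
        return new KubernetesClientBuilder().build();
    }

    // Illustrative route that triggers the listBuilds producer operation.
    @Bean
    public RouteBuilder openshiftBuildsRoute() {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("direct:listBuilds")
                    .to("openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuilds");
            }
        };
    }
}
Component options such as lazyStartProducer can alternatively be set through the Spring Boot auto-configuration properties listed in Section 74.8.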
74.4. Endpoint Options
The Openshift Builds endpoint is configured using URI syntax:
openshift-builds:masterUrl
with the following path and query parameters:
74.4.1. Path Parameters (1 parameter)
Name Description Default Type masterUrl (producer) Required Kubernetes Master URL. String
74.4.2. Query Parameters (21 parameters)
Name Description Default Type apiVersion (producer) The Kubernetes API Version to use. String dnsDomain (producer) The dns domain, used for ServiceCall EIP. String kubernetesClient (producer) Default KubernetesClient to use if provided. KubernetesClient namespace (producer) The namespace. String operation (producer) Producer operation to do on Kubernetes. String portName (producer) The port name, used for ServiceCall EIP. String portProtocol (producer) The port protocol, used for ServiceCall EIP. tcp String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String
74.5. Message Headers
The Openshift Builds component supports 4 message headers, which are listed below:
Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesBuildsLabels (producer) Constant: KUBERNETES_BUILDS_LABELS The Openshift build labels. Map CamelKubernetesBuildName (producer) Constant: KUBERNETES_BUILD_NAME The Openshift build name. String
74.6. Supported producer operations
listBuilds
listBuildsByLabels
getBuild
74.7. Openshift Builds Producer Examples
listBuilds: this operation lists the Builds on an Openshift cluster.
from("direct:list").
    toF("openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuilds").
    to("mock:result");
This operation returns a List of Builds from your Openshift cluster.
listBuildsByLabels: this operation lists the builds by labels on an Openshift cluster.
from("direct:listByLabels").process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        Map<String, String> labels = new HashMap<>();
        labels.put("key1", "value1");
        labels.put("key2", "value2");
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_BUILDS_LABELS, labels);
    }
}).
toF("openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuildsByLabels").
to("mock:result"); This operation returns a List of Builds from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 74.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. 
Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. 
Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
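To make the flat option listing above a little more concrete, here is a minimal application.properties sketch that applies a few of these options to the kubernetes-pods component. The bean name myKubernetesClient is hypothetical, and the #bean reference syntax for the kubernetes-client option is an assumption based on how object-type options are usually wired in Spring Boot auto-configuration.

    # Keep auto configuration of the kubernetes-pods component enabled (the default)
    camel.component.kubernetes-pods.enabled=true
    # Defer creating the producer until the first message is routed
    camel.component.kubernetes-pods.lazy-start-producer=true
    # Route consumer exceptions through Camel's routing error handler instead of only logging them
    camel.component.kubernetes-pods.bridge-error-handler=true
    # Reuse an existing io.fabric8.kubernetes.client.KubernetesClient bean from the registry
    camel.component.kubernetes-pods.kubernetes-client=#myKubernetesClient

The same property pattern applies to every component listed above; only the component name segment changes.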
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "openshift-builds:masterUrl", "from(\"direct:list\"). toF(\"openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuilds\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_BUILDS_LABELS, labels); } }); toF(\"openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuildsByLabels\"). to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-openshift-builds-component-starter
Chapter 2. Deploy OpenShift Data Foundation using local storage devices
Chapter 2. Deploy OpenShift Data Foundation using local storage devices Use this section to deploy OpenShift Data Foundation on IBM Power infrastructure where OpenShift Container Platform is already installed. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Perform the following steps to deploy OpenShift Data Foundation: Install Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Find available storage devices . Create OpenShift Data Foundation cluster on IBM Power . 2.1. Installing Local Storage Operator Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword... box to find the Local Storage Operator from the list of operators and click on it. Set the following options on the Install Operator page: Update Channel as stable . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Approval Strategy as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment . Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in Managing and Allocating Storage Resources guide. Procedure Navigate in the left pane of the OpenShift Web Console to click Operators OperatorHub . Scroll or type a keyword into the Filter by keyword box to search for OpenShift Data Foundation Operator. Click Install on the OpenShift Data Foundation operator page. On the Install Operator page, the following required options are selected by default: Update Channel as stable-4.9 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. 
Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps Verify that OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up so that the console changes take effect. In the Web Console, navigate to Storage and verify that OpenShift Data Foundation is available. 2.3. Finding available storage devices Use this procedure to identify the device names for each of the three or more worker nodes that you have labeled with the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage='' before creating PVs for IBM Power. Procedure List and verify the name of the worker nodes with the OpenShift Data Foundation label. Example output: Log in to each worker node that is used for OpenShift Data Foundation resources and find the name of the additional disk that you have attached while deploying OpenShift Container Platform. Example output: In this example, for worker-0, the available local devices of 500G are sda , sdc , sde , sdg , sdi , sdk , sdm , sdo . Repeat the above step for all the other worker nodes that have the storage devices to be used by OpenShift Data Foundation. See this Knowledge Base article for more details. 2.4. Creating OpenShift Data Foundation cluster on IBM Power Use this procedure to create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have a minimum of three worker nodes with the same storage type and size attached to each node (for example, 200 GB SSD) to use local storage devices on IBM Power. Verify that your OpenShift Container Platform worker nodes are labeled for OpenShift Data Foundation: To identify storage devices on each node, refer to Finding available storage devices . Procedure Log in to the OpenShift Web Console. In the openshift-local-storage namespace, click Operators Installed Operators to view the installed operators. Click the Local Storage installed operator. On the Operator Details page, click the Local Volume link. Click Create Local Volume . Click YAML view to configure the Local Volume. Define a LocalVolume custom resource for block PVs using the following YAML. The above definition selects the sda local device from the worker-0 , worker-1 , and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda . Important Specify appropriate values of nodeSelector as per your environment. The device name should be the same on all the worker nodes. You can also specify more than one entry in devicePaths. Click Create . Confirm whether diskmaker-manager pods and Persistent Volumes are created. For Pods Click Workloads Pods from the left pane of the OpenShift Web Console. Select openshift-local-storage from the Project drop-down list. Check if there are diskmaker-manager pods for each of the worker nodes that you used while creating the LocalVolume CR. For Persistent Volumes Click Storage PersistentVolumes from the left pane of the OpenShift Web Console. Check the Persistent Volumes with the name local-pv-* . The number of Persistent Volumes is equal to the product of the number of worker nodes and the number of storage devices provisioned while creating the LocalVolume CR.
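If you prefer to confirm the same things from the command line rather than the web console, a rough sketch follows; it relies only on the openshift-local-storage project and the local-pv-* naming described above, so adjust it to your environment.

    # One diskmaker-manager pod should be running for each worker node listed in the LocalVolume CR
    oc get pods -n openshift-local-storage | grep diskmaker-manager

    # The local PV count should equal (number of worker nodes) x (number of devices per node)
    oc get pv | grep local-pv- | wc -l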
Important The flexible scaling feature is enabled only when the storage cluster that you created with 3 or more nodes is spread across fewer than the minimum requirement of 3 availability zones. For information about flexible scaling, see the Add capacity using YAML section in the Scaling Storage guide. In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select the Use an existing StorageClass option. Select the required Storage Class that you used while installing LocalVolume. By default, it is set to none . Click . In the Capacity and nodes page, provide the necessary information: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Choose one or both of the following Encryption level : Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volumes (block only) using an encryption-enabled storage class. Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Provide CA Certificate , Client Certificate and Client Private Key by uploading the respective PEM encoded certificate files. Click Save . Select Default (SDN) as Multus is not yet supported on OpenShift Data Foundation on IBM Power. Click . In the Review and create page: Review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark next to it. To verify whether flexible scaling is enabled on your storage cluster, perform the following steps: In the Web Console, click Home Search . Select the Resource as StorageCluster from the drop-down list. Click ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in the spec section and failureDomain in the status section. If flexibleScaling is true and failureDomain is set to host, the flexible scaling feature is enabled. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide.
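As a command-line alternative to the web console search in the verification steps above, the following sketch reads the same two keys directly from the StorageCluster resource. It assumes the default resource name ocs-storagecluster in the openshift-storage namespace, as shown earlier in this section.

    # Print spec.flexibleScaling and status.failureDomain for the storage cluster
    oc get storagecluster ocs-storagecluster -n openshift-storage \
      -o jsonpath='{.spec.flexibleScaling}{"\n"}{.status.failureDomain}{"\n"}'
    # Flexible scaling is enabled when the output is:
    #   true
    #   host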
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "oc get nodes -l cluster.ocs.openshift.io/openshift-storage=", "NAME STATUS ROLES AGE VERSION worker-0 Ready worker 2d11h v1.21.1+f36aa36 worker-1 Ready worker 2d11h v1.21.1+f36aa36 worker-2 Ready worker 2d11h v1.21.1+f36aa36", "oc debug node/<node name>", "oc debug node/worker-0 Starting pod/worker-0-debug To use host binaries, run `chroot /host` Pod IP: 192.168.0.63 If you don't see a command prompt, try pressing enter. sh-4.4# sh-4.4# chroot /host sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop1 7:1 0 500G 0 loop sda 8:0 0 500G 0 disk sdb 8:16 0 120G 0 disk |-sdb1 8:17 0 4M 0 part |-sdb3 8:19 0 384M 0 part `-sdb4 8:20 0 119.6G 0 part sdc 8:32 0 500G 0 disk sdd 8:48 0 120G 0 disk |-sdd1 8:49 0 4M 0 part |-sdd3 8:51 0 384M 0 part `-sdd4 8:52 0 119.6G 0 part sde 8:64 0 500G 0 disk sdf 8:80 0 120G 0 disk |-sdf1 8:81 0 4M 0 part |-sdf3 8:83 0 384M 0 part `-sdf4 8:84 0 119.6G 0 part sdg 8:96 0 500G 0 disk sdh 8:112 0 120G 0 disk |-sdh1 8:113 0 4M 0 part |-sdh3 8:115 0 384M 0 part `-sdh4 8:116 0 119.6G 0 part sdi 8:128 0 500G 0 disk sdj 8:144 0 120G 0 disk |-sdj1 8:145 0 4M 0 part |-sdj3 8:147 0 384M 0 part `-sdj4 8:148 0 119.6G 0 part sdk 8:160 0 500G 0 disk sdl 8:176 0 120G 0 disk |-sdl1 8:177 0 4M 0 part |-sdl3 8:179 0 384M 0 part `-sdl4 8:180 0 119.6G 0 part /sysroot sdm 8:192 0 500G 0 disk sdn 8:208 0 120G 0 disk |-sdn1 8:209 0 4M 0 part |-sdn3 8:211 0 384M 0 part /boot `-sdn4 8:212 0 119.6G 0 part sdo 8:224 0 500G 0 disk sdp 8:240 0 120G 0 disk |-sdp1 8:241 0 4M 0 part |-sdp3 8:243 0 384M 0 part `-sdp4 8:244 0 119.6G 0 part", "get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{\"\\n\"}'", "apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: localblock namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda storageClassName: localblock volumeMode: Block", "spec: flexibleScaling: true [...] status: failureDomain: host" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_ibm_power/deploy-using-local-storage-devices-ibm-power
Chapter 90. OpenTelemetryTracing schema reference
Chapter 90. OpenTelemetryTracing schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec The type property is a discriminator that distinguishes use of the OpenTelemetryTracing type from JaegerTracing . It must have the value opentelemetry for the type OpenTelemetryTracing . Property Property type Description type string Must be opentelemetry .
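As a minimal sketch of how the type is used, the following KafkaConnect fragment enables OpenTelemetry tracing. It assumes the tracing object sits under spec.tracing and that the v1beta2 API version applies, as in other examples in this guide; only the type: opentelemetry value comes from the table above.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaConnect
    metadata:
      name: my-connect
    spec:
      # The discriminator selects OpenTelemetryTracing rather than JaegerTracing
      tracing:
        type: opentelemetry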
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-OpenTelemetryTracing-reference
Chapter 5. Uninstalling OpenShift Data Foundation
Chapter 5. Uninstalling OpenShift Data Foundation 5.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_on_any_platform/uninstalling_openshift_data_foundation
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in four versions: 8u, 11u, 17u, and 21u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Ecosystem Catalog.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.24/pr01
Chapter 107. Password schema reference
Chapter 107. Password schema reference Used in: KafkaUserScramSha512ClientAuthentication Property Property type Description valueFrom PasswordSource Secret from which the password should be read.
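Because the table only names the valueFrom property, the following KafkaUser sketch shows where it typically sits. The secretKeyRef layout of PasswordSource and all of the names (my-user, my-user-secret, password) are assumptions for illustration, not values taken from this reference.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaUser
    metadata:
      name: my-user
    spec:
      authentication:
        type: scram-sha-512
        password:
          # valueFrom identifies the Secret from which the password is read
          valueFrom:
            secretKeyRef:
              name: my-user-secret
              key: password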
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-Password-reference
Introduction to the OpenStack Dashboard
Introduction to the OpenStack Dashboard Red Hat OpenStack Platform 16.0 An overview of the OpenStack dashboard graphical user interface OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/introduction_to_the_openstack_dashboard/index
7.183. ricci
7.183. ricci 7.183.1. RHBA-2015:1405 - ricci bug fix and enhancement update Updated ricci packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The ricci packages contain a daemon and a client for remote configuring and managing of clusters. Bug Fixes BZ# 1187745 Previously, the luci application server and the ccs cluster configuration command in some cases displayed incorrect information about certain aspects of the cluster, such as the daemon status or specific management tasks. With this update, replies to clients' requests against service modules included with the ricci daemon are composed correctly again. As a result, luci and ccs now provide correct information about the cluster. BZ# 1079032 Previously, using the rgmanager utility to disable guest virtual machines (VMs) forced the guests off after 2 minutes. However, when Microsoft Windows guests download system upgrades, they install them during operating system (OS) shutdown. Consequently, if rgmanager forced the Windows guest off during this process, the guest OS could be damaged or destroyed. This update gives the server more time to shut down, and the guest OS can now safely install updates before the shutdown. BZ# 1156157 Prior to this update, the ricci daemon accepted deprecated and insecure SSLv2 connections, which could lead to security issues. With this update, SSLv2 connections are refused, thus fixing this bug. BZ# 1084991 Once authenticated, the ccs utility previously ignored any attempts to re-authenticate. Consequently, the user attempting to re-authenticate with a password did not get an error message even if they used an incorrect password. With this update, ccs verifies the password even if the user is already authenticated by ricci, and if the password is not valid, ccs returns an error. BZ# 1125954 Prior to this update, the ccs utility did not properly ignore the SIGPIPE signal. When piping the output of ccs into another program, a traceback could occur if the other program closed the pipe before the ccs process was resolved. Now, ccs properly ignores SIGPIPE, and ccs no longer issues a traceback in the described situation. BZ# 1126872 Previously, the ccs utility did not properly handle comments in the cluster.conf file if they were located in the services section. As a consequence, tracebacks could occur in ccs when listing services. With this update, ccs ignores any comments in the services or resources sections of cluster.conf instead of trying to parse them, thus fixing this bug. BZ# 1166589 The ccs utility did not prevent multiple syncs or activations from executing in one ccs command. Consequently, it was possible to issue a command using multiple options that caused multiple syncs and activations. This update allows only one sync or activation per command, thus fixing this bug. Enhancement BZ# 1210679 The cluster schema in the ricci packages, used by the ccs utility for offline validation, has been updated. This update includes new options in resource and fence agents packages, and in the rgmanager utility and fenced cluster daemons. Users of ricci are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-ricci
Appendix D. Overview of the host columns
Appendix D. Overview of the host columns Below is the complete overview of columns that can be displayed in the host table divided into content categories. Some columns fall under more than one category. For more information on how to customize columns in the host table, see Section 2.17, "Selecting host columns" . General Power - Whether the host is turned on or off, if available Name - name of the host Operating system - operating system of the host Model - host hardware model (or compute resource in case of virtual hosts) Owner - user or group owning the host Host group - host group of the host Last report - time of the last host report Comment - comment given to host Content Name - name of the host Operating system - operating system of the host Subscription status - does the host have a valid subscription attached Installable updates - numbers of installable updates divided into four categories: security, bugfix, enhancement, total Lifecycle environment - lifecycle environment of the host Content view - content view of the host Registered - time when the host was registered to Satellite Last checkin - last time of the communication between the host and the Satellite Server Network IPv4 - IPv4 address of the host IPv6 - IPv6 address of the host MAC - MAC address of the host Reported data Sockets - number of host sockets Cores - number of host processor cores RAM - amount of memory Boot time - last boot time of the host Virtual - whether or not the host is recognized as a virtual machine Disks total space - total host storage space Kernel Version - Kernel version of the host operating system BIOS vendor - vendor of the host BIOS BIOS release date - release date of the host BIOS BIOS version - version of the host BIOS Puppet (only if the Puppet plugin is installed) Environment name - name of the Puppet environment of the host RH Cloud Recommendations - number of available recommendations for the host
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/overview-of-the-host-columns_managing-hosts
Chapter 9. Backing Up and Restoring Identity Management
Chapter 9. Backing Up and Restoring Identity Management Red Hat Enterprise Linux Identity Management provides a solution to manually back up and restore the IdM system, for example when a server stops performing correctly or data loss occurs. During backup, the system creates a directory containing information on your IdM setup and stores it. During restore, you can use this backup directory to bring your original IdM setup back. Important Use the backup and restore procedures described in this chapter only if you cannot rebuild the lost part of the IdM server group from the remaining servers in the deployment, by reinstalling the lost replicas as replicas of the remaining ones. The "Backup and Restore in IdM/IPA" Knowledgebase solution describes how to avoid losses by maintaining several server replicas. Rebuilding from an existing replica with the same data is preferable, because the backed-up version usually contains older, thus potentially outdated, information. The potential threat scenarios that backup and restore can prevent include: Catastrophic hardware failure on a machine occurs and the machine becomes incapable of further functioning. In this situation: Reinstall the operating system from scratch. Configure the machine with the same host name, fully qualified domain name (FQDN), and IP address. Install the IdM packages as well as all other optional packages relating to IdM that were present on the original system. Restore the full backup of the IdM server. An upgrade on an isolated machine fails. The operating system remains functional, but the IdM data is corrupted, which is why you want to restore the IdM system to a known good state. Important In cases of hardware or upgrade failure, such as the two mentioned above, restore from backup only if all replicas or a replica with a special role, such as the only certificate authority (CA), were lost. If a replica with the same data still exists, it is recommended to delete the lost replica and then rebuild it from the remaining one. Undesirable changes were made to the LDAP content, for example entries were deleted, and you want to revert them. Restoring backed-up LDAP data returns the LDAP entries to the state without affecting the IdM system itself. The restored server becomes the only source of information for IdM; other master servers are re-initialized from the restored server. Any data created after the last backup was made are lost. Therefore you should not use the backup and restore solution for normal system maintenance. If possible, always rebuild the lost server by reinstalling it as a replica. The backup and restore features can be managed only from the command line and are not available in the IdM web UI. 9.1. Full-Server Backup and Data-Only Backup IdM offers two backup options: Full-IdM server backup Full-server backup creates a backup copy of all the IdM server files as well as LDAP data, which makes it a standalone backup. IdM affects hundreds of files; the files that the backup process copies is a mix of whole directories and specific files, such as configuration files or log files, and relate directly to IdM or to various services that IdM depends on. Because the full-server backup is a raw file backup, it is performed offline. The script that performs the full-server backup stops all IdM services to ensure a safe course of the backup process. For the full list of files and directories that the full-server backup copies, see Section 9.1.3, "List of Directories and Files Copied During Backup" . 
Data-only Backup The data-only backup only creates a backup copy of LDAP data and the changelog. The process backs up the IPA-REALM instance and can also back up multiple back ends or only a single back end; the back ends include the IPA back end and the CA Dogtag back end. This type of backup also backs up a record of the LDAP content stored in LDIF (LDAP Data Interchange Format). The data-only backup can be performed both online and offline. By default, IdM stores the created backups in the /var/lib/ipa/backup/ directory. The naming conventions for the subdirectories containing the backups are: ipa-full-YEAR-MM-DD-HH-MM-SS in the GMT time zone for the full-server backup ipa-data-YEAR-MM-DD-HH-MM-SS in the GMT time zone for the data-only backup 9.1.1. Creating a Backup Both full-server and data-only backups are created using the ipa-backup utility which must always be run as root. To create a full-server backup, run ipa-backup . Important Performing a full-server backup stops all IdM services because the process must run offline. The IdM services will start again after the backup is finished. To create a data-only backup, run the ipa-backup --data command. You can add several additional options to ipa-backup : --online performs an online backup; this option is only available with data-only backups --logs includes the IdM service log files in the backup For further information on using ipa-backup , see the ipa-backup (1) man page. 9.1.1.1. Working Around Insufficient Space on Volumes Involved During Backup This section describes how to address problems if directories involved in the IdM backup process are stored on volumes with insufficient free space. Insufficient Space on the Volume That Contains /var/lib/ipa/backup/ If the /var/lib/ipa/backup/ directory is stored on a volume with insufficient free space, it is not possible to create a backup. To address the problem, use one of the following workarounds: Create a directory on a different volume and link it to /var/lib/ipa/backup/ . For example, if /home is stored on a different volume with enough free space: Create a directory, such as /home/idm/backup/ : Set the following permissions to the directory: If /var/lib/ipa/backup/ contains existing backups, move them to the new directory: Remove the /var/lib/ipa/backup/ directory: Create the /var/lib/ipa/backup/ link to the /home/idm/backup/ directory: Mount a directory stored on a different volume to /var/lib/ipa/backup/ . For example, if /home is stored on a different volume with enough free space, create /home/idm/backup/ and mount it to /var/lib/ipa/backup/ : Create the /home/idm/backup/ directory: Set the following permissions to the directory: If /var/lib/ipa/backup/ contains existing backups, move them to the new directory: Mount /home/idm/backup/ to /var/lib/ipa/backup/ : To automatically mount /home/idm/backup/ to /var/lib/ipa/backup/ when the system boots, append the following to the /etc/fstab file: Insufficient Space on the Volume That Contains /tmp If the backup fails due to insufficient space being available in the /tmp directory, change the location of the staged files to be created during the backup by using the TMPDIR environment variable: For more details, see the ipa-backup command fails to finish Knowledgebase solution. 9.1.2. Encrypting Backup You can encrypt the IdM backup using the GNU Privacy Guard (GPG) encryption. 
To create a GPG key: Create a keygen file containing the key details, for example, by running cat >keygen <<EOF and providing the required encryption details to the file from the command line: Generate a new key pair called backup and feed the contents of keygen to the command. The following example generates a key pair with the path names /root/backup.sec and /root/backup.pub : To create a GPG-encrypted backup, pass the generated backup key to ipa-backup by supplying the following options: --gpg , which instructs ipa-backup to perform the encrypted backup --gpg-keyring=GPG_KEYRING , which provides the full path to the GPG keyring without the file extension. For example: Note You might experience problems if your system uses the gpg2 utility to generate GPG keys because gpg2 requires an external program to function. To generate the key purely from console in this situation, add the pinentry-program /usr/bin/pinentry-curses line to the .gnupg/gpg-agent.conf file before generating a key. 9.1.3. List of Directories and Files Copied During Backup Directories: Files: Log files and directories:
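To tie the options above together, here is a small usage sketch. It only combines the flags, the default backup location, and the directory naming convention already described in this chapter; run it as root.

    # Data-only backup performed online, including the IdM service log files
    ipa-backup --data --online --logs

    # Backups are stored under /var/lib/ipa/backup/ by default;
    # expect a subdirectory named ipa-data-YEAR-MM-DD-HH-MM-SS (GMT)
    ls /var/lib/ipa/backup/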
[ "mkdir -p /home/idm/backup/", "chown root:root /home/idm/backup/ chmod 700 /home/idm/backup/", "mv /var/lib/ipa/backup/* /home/idm/backup/", "rm -rf /var/lib/ipa/backup/", "ln -s /home/idm/backup/ /var/lib/ipa/backup/", "mkdir -p /home/idm/backup/", "chown root:root /home/idm/backup/ chmod 700 /home/idm/backup/", "mv /var/lib/ipa/backup/* /home/idm/backup/", "mount -o bind /home/idm/backup/ /var/lib/ipa/backup/", "/home/idm/backup/ /var/lib/ipa/backup/ none bind 0 0", "TMPDIR= /path/to/backup ipa-backup", "cat >keygen <<EOF > %echo Generating a standard key > Key-Type: RSA > Key-Length:2048 > Name-Real: IPA Backup > Name-Comment: IPA Backup > Name-Email: [email protected] > Expire-Date: 0 > %pubring /root/backup.pub > %secring /root/backup.sec > %commit > %echo done > EOF", "gpg --batch --gen-key keygen gpg --no-default-keyring --secret-keyring /root/backup.sec --keyring /root/backup.pub --list-secret-keys", "ipa-backup --gpg --gpg-keyring=/root/backup", "/usr/share/ipa/html /root/.pki /etc/pki-ca /etc/pki/pki-tomcat /etc/sysconfig/pki /etc/httpd/alias /var/lib/pki /var/lib/pki-ca /var/lib/ipa/sysrestore /var/lib/ipa-client/sysrestore /var/lib/ipa/dnssec /var/lib/sss/pubconf/krb5.include.d/ /var/lib/authconfig/last /var/lib/certmonger /var/lib/ipa /var/run/dirsrv /var/lock/dirsrv", "/etc/named.conf /etc/named.keytab /etc/resolv.conf /etc/sysconfig/pki-ca /etc/sysconfig/pki-tomcat /etc/sysconfig/dirsrv /etc/sysconfig/ntpd /etc/sysconfig/krb5kdc /etc/sysconfig/pki/ca/pki-ca /etc/sysconfig/ipa-dnskeysyncd /etc/sysconfig/ipa-ods-exporter /etc/sysconfig/named /etc/sysconfig/ods /etc/sysconfig/authconfig /etc/ipa/nssdb/pwdfile.txt /etc/pki/ca-trust/source/ipa.p11-kit /etc/pki/ca-trust/source/anchors/ipa-ca.crt /etc/nsswitch.conf /etc/krb5.keytab /etc/sssd/sssd.conf /etc/openldap/ldap.conf /etc/security/limits.conf /etc/httpd/conf/password.conf /etc/httpd/conf/ipa.keytab /etc/httpd/conf.d/ipa-pki-proxy.conf /etc/httpd/conf.d/ipa-rewrite.conf /etc/httpd/conf.d/nss.conf /etc/httpd/conf.d/ipa.conf /etc/ssh/sshd_config /etc/ssh/ssh_config /etc/krb5.conf /etc/ipa/ca.crt /etc/ipa/default.conf /etc/dirsrv/ds.keytab /etc/ntp.conf /etc/samba/smb.conf /etc/samba/samba.keytab /root/ca-agent.p12 /root/cacert.p12 /var/kerberos/krb5kdc/kdc.conf /etc/systemd/system/multi-user.target.wants/ipa.service /etc/systemd/system/multi-user.target.wants/sssd.service /etc/systemd/system/multi-user.target.wants/certmonger.service /etc/systemd/system/pki-tomcatd.target.wants/[email protected] /var/run/ipa/services.list /etc/opendnssec/conf.xml /etc/opendnssec/kasp.xml /etc/ipa/dnssec/softhsm2.conf /etc/ipa/dnssec/softhsm_pin_so /etc/ipa/dnssec/ipa-ods-exporter.keytab /etc/ipa/dnssec/ipa-dnskeysyncd.keytab /etc/idm/nssdb/cert8.db /etc/idm/nssdb/key3.db /etc/idm/nssdb/secmod.db /etc/ipa/nssdb/cert8.db /etc/ipa/nssdb/key3.db /etc/ipa/nssdb/secmod.db", "/var/log/pki-ca /var/log/pki/ /var/log/dirsrv/slapd-PKI-IPA /var/log/httpd /var/log/ipaserver-install.log /var/log/kadmind.log /var/log/pki-ca-install.log /var/log/messages /var/log/ipaclient-install.log /var/log/secure /var/log/ipaserver-uninstall.log /var/log/pki-ca-uninstall.log /var/log/ipaclient-uninstall.log /var/named/data/named.run" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/backup-restore
Chapter 14. Asset overview
Chapter 14. Asset overview Business rules, process definition files, and other assets and resources created in Business Central are stored in the Artifact repository (Knowledge Store) that KIE Server accesses. The Artifact repository is a centralized repository for your business knowledge. It connects multiple GIT repositories so that you can access them from a single environment while storing different kinds of knowledge and artifacts in different locations. GIT is a distributed version control system that implements revisions as commit objects. Every time you save your changes to a repository, a new commit object is created in the GIT repository. You can also copy an existing repository. This copying process is typically called cloning, and the resulting repository is referred to as a clone. Every clone contains the full history of the collection of files, and a cloned repository has the same content as the original repository. Business Central provides a web front-end that enables you to view and update the stored content. To access Artifact repository assets, go to Menu Design Projects in Business Central and click the project name.
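Because commit and clone are plain GIT concepts, a generic GIT example may help make them concrete; the repository URL below is purely illustrative and is not a Business Central endpoint.

    # Cloning copies an existing repository, including its full commit history
    git clone https://example.com/git/my-project.git my-project-clone

    # Saving a change to a tracked file records it as a new commit object in the clone
    cd my-project-clone
    git commit -am "Describe the change"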
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/assets_con
Chapter 1. Monitoring overview
Chapter 1. Monitoring overview 1.1. About OpenShift Container Platform monitoring OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. You also have the option to enable monitoring for user-defined projects . A cluster administrator can configure the monitoring stack with the supported configurations. OpenShift Container Platform delivers monitoring best practices out of the box. A set of alerts are included by default that immediately notify administrators about issues with a cluster. Default dashboards in the OpenShift Container Platform web console include visual representations of cluster metrics to help you to quickly understand the state of your cluster. With the OpenShift Container Platform web console, you can view and manage metrics , alerts , and review monitoring dashboards . In the Observe section of OpenShift Container Platform web console, you can access and manage monitoring features such as metrics , alerts , monitoring dashboards , and metrics targets . After installing OpenShift Container Platform, cluster administrators can optionally enable monitoring for user-defined projects. By using this feature, cluster administrators, developers, and other users can specify how services and pods are monitored in their own projects. As a cluster administrator, you can find answers to common problems such as user metrics unavailability and high consumption of disk space by Prometheus in Troubleshooting monitoring issues . 1.2. Understanding the monitoring stack The OpenShift Container Platform monitoring stack is based on the Prometheus open source project and its wider ecosystem. The monitoring stack includes the following: Default platform monitoring components . A set of platform monitoring components are installed in the openshift-monitoring project by default during an OpenShift Container Platform installation. This provides monitoring for core OpenShift Container Platform components including Kubernetes services. The default monitoring stack also enables remote health monitoring for clusters. These components are illustrated in the Installed by default section in the following diagram. Components for monitoring user-defined projects . After optionally enabling monitoring for user-defined projects, additional monitoring components are installed in the openshift-user-workload-monitoring project. This provides monitoring for user-defined projects. These components are illustrated in the User section in the following diagram. 1.2.1. Default monitoring components By default, the OpenShift Container Platform 4.10 monitoring stack includes these components: Table 1.1. Default monitoring stack components Component Description Cluster Monitoring Operator The Cluster Monitoring Operator (CMO) is a central component of the monitoring stack. It deploys, manages, and automatically updates Prometheus and Alertmanager instances, Thanos Querier, Telemeter Client, and metrics targets. The CMO is deployed by the Cluster Version Operator (CVO). Prometheus Operator The Prometheus Operator (PO) in the openshift-monitoring project creates, configures, and manages platform Prometheus instances and Alertmanager instances. It also automatically generates monitoring target configurations based on Kubernetes label queries. Prometheus Prometheus is the monitoring system on which the OpenShift Container Platform monitoring stack is based. 
Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing. Prometheus Adapter The Prometheus Adapter (PA in the preceding diagram) translates Kubernetes node and pod queries for use in Prometheus. The resource metrics that are translated include CPU and memory utilization metrics. The Prometheus Adapter exposes the cluster resource metrics API for horizontal pod autoscaling. The Prometheus Adapter is also used by the oc adm top nodes and oc adm top pods commands. Alertmanager The Alertmanager service handles alerts received from Prometheus. Alertmanager is also responsible for sending the alerts to external notification systems. kube-state-metrics agent The kube-state-metrics exporter agent (KSM in the preceding diagram) converts Kubernetes objects to metrics that Prometheus can use. openshift-state-metrics agent The openshift-state-metrics exporter (OSM in the preceding diagram) expands upon kube-state-metrics by adding metrics for OpenShift Container Platform-specific resources. node-exporter agent The node-exporter agent (NE in the preceding diagram) collects metrics about every node in a cluster. The node-exporter agent is deployed on every node. Thanos Querier Thanos Querier aggregates and optionally deduplicates core OpenShift Container Platform metrics and metrics for user-defined projects under a single, multi-tenant interface. Grafana The Grafana analytics platform provides dashboards for analyzing and visualizing the metrics. The Grafana instance that is provided with the monitoring stack, along with its dashboards, is read-only. Telemeter Client Telemeter Client sends a subsection of the data from platform Prometheus instances to Red Hat to facilitate Remote Health Monitoring for clusters. All of the components in the monitoring stack are monitored by the stack and are automatically updated when OpenShift Container Platform is updated. Note All components of the monitoring stack use the TLS security profile settings that are centrally configured by a cluster administrator. If you configure a monitoring stack component that uses TLS security settings, the component uses the TLS security profile settings that already exist in the tlsSecurityProfile field in the global OpenShift Container Platform apiservers.config.openshift.io/cluster resource. 1.2.2. Default monitoring targets In addition to the components of the stack itself, the default monitoring stack monitors: CoreDNS Elasticsearch (if Logging is installed) etcd Fluentd (if Logging is installed) HAProxy Image registry Kubelets Kubernetes API server Kubernetes controller manager Kubernetes scheduler OpenShift API server OpenShift Controller Manager Operator Lifecycle Manager (OLM) Note Each OpenShift Container Platform component is responsible for its monitoring configuration. For problems with the monitoring of an OpenShift Container Platform component, open a Jira issue against that component, not against the general monitoring component. Other OpenShift Container Platform framework components might be exposing metrics as well. For details, see their respective documentation. 1.2.3. Components for monitoring user-defined projects OpenShift Container Platform 4.10 includes an optional enhancement to the monitoring stack that enables you to monitor services and pods in user-defined projects. This feature includes the following components: Table 1.2. 
Components for monitoring user-defined projects Component Description Prometheus Operator The Prometheus Operator (PO) in the openshift-user-workload-monitoring project creates, configures, and manages Prometheus and Thanos Ruler instances in the same project. Prometheus Prometheus is the monitoring system through which monitoring is provided for user-defined projects. Prometheus sends alerts to Alertmanager for processing. Thanos Ruler The Thanos Ruler is a rule evaluation engine for Prometheus that is deployed as a separate process. In OpenShift Container Platform 4.10, Thanos Ruler provides rule and alerting evaluation for the monitoring of user-defined projects. Note The components in the preceding table are deployed after monitoring is enabled for user-defined projects. All of the components in the monitoring stack are monitored by the stack and are automatically updated when OpenShift Container Platform is updated. 1.2.4. Monitoring targets for user-defined projects When monitoring is enabled for user-defined projects, you can monitor: Metrics provided through service endpoints in user-defined projects. Pods running in user-defined projects. 1.3. Glossary of common terms for OpenShift Container Platform monitoring This glossary defines common terms that are used in OpenShift Container Platform architecture. Alertmanager Alertmanager handles alerts received from Prometheus. Alertmanager is also responsible for sending the alerts to external notification systems. Alerting rules Alerting rules contain a set of conditions that outline a particular state within a cluster. Alerts are triggered when those conditions are true. An alerting rule can be assigned a severity that defines how the alerts are routed. Cluster Monitoring Operator The Cluster Monitoring Operator (CMO) is a central component of the monitoring stack. It deploys and manages Prometheus instances such as, the Thanos Querier, the Telemeter Client, and metrics targets to ensure that they are up to date. The CMO is deployed by the Cluster Version Operator (CVO). Cluster Version Operator The Cluster Version Operator (CVO) manages the lifecycle of cluster Operators, many of which are installed in OpenShift Container Platform by default. config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. Container A container is a lightweight and executable image that includes software and all its dependencies. Containers virtualize the operating system. As a result, you can run containers anywhere from a data center to a public or private cloud as well as a developer's laptop. custom resource (CR) A CR is an extension of the Kubernetes API. You can create custom resources. etcd etcd is the key-value store for OpenShift Container Platform, which stores the state of all resource objects. Fluentd Fluentd gathers logs from nodes and feeds them to Elasticsearch. Kubelets Runs on nodes and reads the container manifests. Ensures that the defined containers have started and are running. Kubernetes API server Kubernetes API server validates and configures data for the API objects. Kubernetes controller manager Kubernetes controller manager governs the state of the cluster. Kubernetes scheduler Kubernetes scheduler allocates pods to nodes. labels Labels are key-value pairs that you can use to organize and select subsets of objects such as a pod. 
node A worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. Operator The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers. Operator Lifecycle Manager (OLM) OLM helps you install, update, and manage the lifecycle of Kubernetes native applications. OLM is an open source toolkit designed to manage Operators in an effective, automated, and scalable way. Persistent storage Stores the data even after the device is shut down. Kubernetes uses persistent volumes to store the application data. Persistent volume claim (PVC) You can use a PVC to mount a PersistentVolume into a Pod. You can access the storage without knowing the details of the cloud environment. pod The pod is the smallest logical unit in Kubernetes. A pod is comprised of one or more containers to run in a worker node. Prometheus Prometheus is the monitoring system on which the OpenShift Container Platform monitoring stack is based. Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing. Prometheus adapter The Prometheus Adapter translates Kubernetes node and pod queries for use in Prometheus. The resource metrics that are translated include CPU and memory utilization. The Prometheus Adapter exposes the cluster resource metrics API for horizontal pod autoscaling. Prometheus Operator The Prometheus Operator (PO) in the openshift-monitoring project creates, configures, and manages platform Prometheus and Alertmanager instances. It also automatically generates monitoring target configurations based on Kubernetes label queries. Silences A silence can be applied to an alert to prevent notifications from being sent when the conditions for an alert are true. You can mute an alert after the initial notification, while you work on resolving the underlying issue. storage OpenShift Container Platform supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. Thanos Ruler The Thanos Ruler is a rule evaluation engine for Prometheus that is deployed as a separate process. In OpenShift Container Platform, Thanos Ruler provides rule and alerting evaluation for the monitoring of user-defined projects. web console A user interface (UI) to manage OpenShift Container Platform. 1.4. Additional resources About remote health monitoring Granting users permission to monitor user-defined projects Configuring TLS security profiles 1.5. steps Configuring the monitoring stack
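The overview above notes that cluster administrators can optionally enable monitoring for user-defined projects. The following is a minimal sketch of how that feature is typically switched on in OpenShift Container Platform 4.10, by setting enableUserWorkload: true in the cluster-monitoring-config config map; treat the config map contents as something to merge with any existing configuration rather than overwrite.

# Sketch: enable monitoring for user-defined projects by creating or updating
# the cluster monitoring config map in the openshift-monitoring project.
oc apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
EOF

# The user-workload monitoring components should then start in their own project.
oc get pods -n openshift-user-workload-monitoring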
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/monitoring/monitoring-overview
Chapter 6. Premigration checklists
Chapter 6. Premigration checklists Before you migrate your application workloads with the Migration Toolkit for Containers (MTC), review the following checklists. 6.1. Cluster health checklist ❏ The clusters meet the minimum hardware requirements for the specific platform and installation method, for example, on bare metal . ❏ All MTC prerequisites are met. ❏ All nodes have an active OpenShift Container Platform subscription. ❏ You have verified node health . ❏ The identity provider is working. ❏ The migration network has a minimum throughput of 10 Gbps. ❏ The clusters have sufficient resources for migration. Note Clusters require additional memory, CPUs, and storage in order to run a migration on top of normal workloads. Actual resource requirements depend on the number of Kubernetes resources being migrated in a single migration plan. You must test migrations in a non-production environment in order to estimate the resource requirements. ❏ The etcd disk performance of the clusters has been checked with fio . 6.2. Source cluster checklist ❏ You have checked for persistent volumes (PVs) with abnormal configurations stuck in a Terminating state by running the following command: USD oc get pv ❏ You have checked for pods whose status is other than Running or Completed by running the following command: USD oc get pods --all-namespaces | egrep -v 'Running | Completed' ❏ You have checked for pods with a high restart count by running the following command: USD oc get pods --all-namespaces --field-selector=status.phase=Running \ -o json | jq '.items[]|select(any( .status.containerStatuses[]; \ .restartCount > 3))|.metadata.name' Even if the pods are in a Running state, a high restart count might indicate underlying problems. ❏ The cluster certificates are valid for the duration of the migration process. ❏ You have checked for pending certificate-signing requests by running the following command: USD oc get csr -A | grep pending -i ❏ The registry uses a recommended storage type . ❏ You can read and write images to the registry. ❏ The etcd cluster is healthy. ❏ The average API server response time on the source cluster is less than 50 ms. 6.3. Target cluster checklist ❏ The cluster has the correct network configuration and permissions to access external services, for example, databases, source code repositories, container image registries, and CI/CD tools. ❏ External applications and services that use services provided by the cluster have the correct network configuration and permissions to access the cluster. ❏ Internal container image dependencies are met. ❏ The target cluster and the replication repository have sufficient storage space.
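The cluster health checklist above includes verifying etcd disk performance with fio. The following sketch shows one way such a check is commonly run; the target directory, file size, and block size are illustrative assumptions, and the acceptance threshold should come from the etcd performance documentation for your release.

# Illustrative fio run against the disk that backs etcd on a control plane node.
mkdir -p /var/lib/etcd/fio-test
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd/fio-test \
    --size=22m --bs=2300 --name=etcd-perf
# Review the fsync/fdatasync latency percentiles in the output; a 99th percentile
# of a few milliseconds or less generally indicates acceptable etcd disk performance.
rm -rf /var/lib/etcd/fio-test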
[ "oc get pv", "oc get pods --all-namespaces | egrep -v 'Running | Completed'", "oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'", "oc get csr -A | grep pending -i" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/migration_toolkit_for_containers/premigration-checklists-mtc
Chapter 4. Cleaning Integration Test Suite (tempest) resources
Chapter 4. Cleaning Integration Test Suite (tempest) resources Before you validate your deployments by using OpenStack Integration Test Suite (tempest), run the cleanup command with the --init-saved-state flag. This command scans your environment to discover resources, for example networks, volumes, images, flavors, projects, and users. The discovered resources are saved in a file called saved_state.json . When the tempest cleanup command is executed, all resources not recorded in the saved_state.json file are deleted. Prerequisites An OpenStack environment that contains the Integration Test Suite packages. An Integration Test Suite configuration that corresponds to your OpenStack environment. For more information, see Creating a workspace . One or more completed Integration Test Suite validation tests. 4.1. Performing a dry run Perform a dry run before you execute the cleanup. A dry run lists the resources that the Integration Test Suite would delete in a cleanup, without actually deleting anything. The dry_run.json file contains the list of resources that a cleanup would delete. Procedure Complete the dry run: Review the dry_run.json file to ensure that the cleanup does not delete any resources that you require for your environment. 4.2. Performing a tempest cleanup Before you run any tempest tests, you must initialize the saved state. This creates the file saved_state.json , which prevents the cleanup from deleting objects that must be kept. Warning If you do not run the cleanup command with the --init-saved-state flag, RHOSP objects are deleted. If you create objects after running the cleanup command with --init-saved-state , those objects can be deleted by subsequent tempest commands. Procedure Initialize the saved state to create the saved_state.json file: Perform the cleanup: The tempest cleanup command deletes tempest resources but does not delete projects or the tempest administrator account. Note You can modify the saved_state.json file to include or exclude objects that you want to retain or remove.
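Before performing the cleanup, it can be useful to inspect the dry run report. The commands below are a small illustrative sketch; the dry run command comes from this chapter, while the JSON pretty-printing step is an assumption about how you might review the report.

# Generate the dry run report, then review the resources that a cleanup would delete.
tempest cleanup --dry-run
python3 -m json.tool dry_run.json | less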
[ "tempest cleanup --dry-run", "tempest cleanup --init-saved-state", "tempest cleanup" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/openstack_integration_test_suite_guide/assembly_cleaning-integration-test-suite-tempest-resources_tempest
13.3. Example Query
13.3. Example Query The following example has a query that retrieves all engineering employees born since 1970. Example 13.1. Example query SELECT e.title, e.lastname FROM Employees AS e JOIN Departments AS d ON e.dept_id = d.dept_id WHERE year(e.birthday) >= 1970 AND d.dept_name = 'Engineering' Logically, the data from the Employees and Departments tables are retrieved, then joined, then filtered as specified, and finally the output columns are projected. The canonical query plan thus looks like this: Figure 13.1. Canonical Query Plan Data flows from the tables at the bottom upwards through the join, through the select, and finally through the project to produce the final results. The data passed between each node is logically a result set with columns and rows. This is what happens logically , not how the plan is actually executed. Starting from this initial plan, the query planner performs transformations on the query plan tree to produce an equivalent plan that retrieves the same results faster. Both a federated query planner and a relational database planner deal with the same concepts and many of the same plan transformations. In this example, the criteria on the Departments and Employees tables will be pushed down the tree to filter the results as early as possible. In both cases, the goal is to retrieve the query results in the fastest possible time. However, the relational database planner does this primarily by optimizing the access paths in pulling data from storage. In contrast, a federated query planner is less concerned about storage access because it is typically pushing that burden to the data source. The most important consideration for a federated query planner is minimizing data transfer.
[ "SELECT e.title, e.lastname FROM Employees AS e JOIN Departments AS d ON e.dept_id = d.dept_id WHERE year(e.birthday) >= 1970 AND d.dept_name = 'Engineering'" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/example_query
Chapter 2. RHBA-2019:1077 VDSM 4.3 GA
Chapter 2. RHBA-2019:1077 VDSM 4.3 GA The bugs in this chapter are addressed by advisory RHBA-2019:1077. Further information about this advisory is available at https://access.redhat.com/errata/RHBA-2019:1077 . vdsm BZ# 1593568 BZ# 1111783 BZ# 1111784 BZ# 1561033 BZ# 1614430 BZ# 1617745 BZ# 1654417 BZ# 1655115 BZ# 1662449 BZ# 1510336 BZ# 1510856 BZ# 1511891 BZ# 1560460 BZ# 1575777 BZ# 1625591 BZ# 1636254 BZ# 1514004 BZ# 1585008 BZ# 1589612 BZ# 1647607
[ "Previously, if a CD-ROM was ejected from a virtual machine and VDSM was fenced or restarted, the virtual machine became unresponsive and/or the Manager reported its status as \"Unknown.\" In the current release, a virtual machine with an ejected CD-ROM recovers after restarting VDSM.", "In the current release, Windows clustering is supported for directly attached LUNs and shared disks.", "The current release supports Windows clustering for directly attached LUNs and shared disks.", "The current release adds AMD SMT-awareness to VDSM and RHV-M. This change helps meet the constraints of schedulers and software that are licensed per-core. It also improves cache coherency for VMs by presenting a more accurate view of the CPU topology. As a result, SMT works as expected on AMD CPUs.", "Vdsm-gluster tries to run heal operations on all volumes. Previously, if the gluster commands got stuck, VDSM started waiting indefinitely for them, exhausting threads, until it timed-out. Then it stopped communicating with the Manager and went offline. The current release adds a timeout to the gluster heal info command so the command terminates within a set timeout and threads do not become exhausted. On timeout, the system issues a GlusterCommandTimeoutException, which causes the command to exit and notifies the Manager. As a result, VDSM threads are not stuck, and VDSM does not go offline.", "Previously, when a migrating virtual machine was not properly set up on the destination host, it could still start there under certain circumstances, then run unnoticed and without VDSM supervision. This situation sometimes resulted in split-brain. Now migration is always prevented from starting if the virtual machine set up fails on the destination host.", "Previously, if an xlease volume was corrupted, VDSM could not acquire leases and features like high-availability virtual machines did not work. The current release adds rebuild-xleases and format-xleases commands to the VDSM tool. Administrators can use these commands to rebuild or format corrupted xlease volumes.", "The current release removes the VDSM daemon's support for cluster levels 3.6/4.0 and Red Hat Virtualization Manager 3.6/4.0. This means that VDSM from RHV 4.3 cannot be used with the Manager from RHV 3.6/4.0. To use the new version of VDSM, upgrade the Manager to version 4.1 or later.", "If a user with an invalid sudo configuration uses sudo to run commands, sudo appends a \"last login\" message to the command output. When this happens, VDSM fails to run lvm commands. Previously, the VDSM log did not contain helpful information about what caused those failures. The current release improves error handling in the VDSM code running lvm commands. Now, if VDSM fails, an error message clearly states that there was invalid output from the lvm commands, and shows the output added by sudo. Although this change does not fix the root cause, an invalid sudo configuration, it makes it easier to understand the issue.", "This release adds the ability to manage the MTU of VM networks in a centralized way, enabling oVirt to manage MTU all the way from the host network to the guest in the VM. This feature allows for the consistent use of MTUs in logical networks with small MTU (e.g., tunneled networks) and large MTU (e.g., jumbo frames) in VMs, even without DHCP.", "Making large snapshots and other abnormal events can pause virtual machines, impacting their system time, and other functions, such as timestamps. 
The current release provides Guest Time Synchronization, which, after a snapshot is created and the virtual machine is un-paused, uses VDSM and the guest agent to synchronize the system time of the virtual machine with that of the host. The time_sync_snapshot_enable option enables synchronization for snapshots. The time_sync_cont_enable option enables synchronization for abnormal events that may pause virtual machines. By default, these features are disabled for backward compatibility.", "Previously, copying volumes to preallocated disks was slower than necessary and did not make optimal use of available network resources. In the current release, qemu-img uses out-of-order writing to improve the speed of write operations by up to six times. These operations include importing, moving, and copying large disks to preallocated storage.", "Previously, VDSM used stat() to implement islink() checks when using ioprocess to run commands. As a result, if a user or storage system created a recursive symbolic link inside the ISO storage domain, VDSM failed to report file information. In the current release, VDSM uses lstat() to implement islink() so it can report file information from recursive symbolic links.", "Previously, a floppy drive in a virtual machine could prevent the virtual machine from being imported. In the current release, floppy drives are ignored during import.", "Previously, after importing and removing a Kernel-based Virtual Machine (KVM), trying to re-import the same virtual machine failed with a \"Job ID already exists\" error. The current release deletes completed import jobs from the VDSM. You can re-import a virtual machine without encountering the same error.", "VDSM uses lldpad. Due to a bug, lldpad confuses NetXtreme II BCM57810 FCoE-enabled cards. When the VDSM configuration enables lldpad to read lldp data from the card, it renders the card unusable. To work around this issue, set enable_lldp=false in vdsm.conf.d and restart VDSM. Check that lldpad is disabled on all relevant interfaces by entering the command, \"lldptool get-lldp -i USDifname adminStatus\". If lldp is enabled, disable it by entering \"lldptool set-lldp -i USDifname adminStatus=disabled\". After ensuring that lldp support is disabled in VDSM, networking should be unaffected.", "The TLSv1 and TLSv1.1 protocols are no longer secure. In the current release, they have been forcefully disabled in the VDSM configuration and cannot be enabled. Only TLSv1.2 and higher versions of the protocol are enabled. The exact version enabled depends on the underlying OpenSSL version.", "The current release adds a new 'ssl_ciphers' option to VDSM, which enables you to configure available ciphers for encrypted connections (for example, between the Manager and VDSM, or between VDSM and VDSM). The values this option uses conform to the OpenSSL standard. For more information, see https://access.redhat.com/articles/4056301", "When a virtual machine starts, VDSM uses the domain metadata section to store data which is required to configure a virtual machine but which is not adequately represented by the standard libvirt domain. Previously, VDSM stored drive IO tune settings in this metadata that were redundant because they already had proper representation in the libvirt domain. Furthermore, if IO tune settings were enabled, a bug in storing the IO tune settings prevented the virtual machine from starting. 
The current release removes the redundant information from the domain metadata and fixes the bug that prevented virtual machines from starting.", "Previously, an incorrectly named USB3 controller, \"qemu_xhci,\" prevented virtual machines from booting if they used a host passthrough with this controller. The current release corrects the controller name to \"qemu-xhci,\" which resolves the booting issue." ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_notes/rhba-20191077-vdsm
Chapter 2. Power Management Auditing And Analysis
Chapter 2. Power Management Auditing And Analysis 2.1. Audit And Analysis Overview The detailed manual audit, analysis, and tuning of a single system is usually the exception because the time and cost spent to do so typically outweigh the benefits gained from these last pieces of system tuning. However, performing these tasks once for a large number of nearly identical systems, where you can reuse the same settings for all of them, can be very useful. For example, consider the deployment of thousands of desktop systems, or an HPC cluster where the machines are nearly identical. Another reason to do auditing and analysis is to provide a basis for comparison against which you can identify regressions or changes in system behavior in the future. The results of this analysis can be very helpful in cases where hardware, BIOS, or software updates happen regularly and you want to avoid any surprises with regard to power consumption. Generally, a thorough audit and analysis gives you a much better idea of what is really happening on a particular system. Auditing and analyzing a system with regard to power consumption is relatively hard, even with the most modern systems available. Most systems do not provide the necessary means to measure power use via software. Exceptions exist, though: the ILO management console of Hewlett Packard server systems has a power management module that you can access through the web. IBM provides a similar solution in their BladeCenter power management module. On some Dell systems, the IT Assistant offers power monitoring capabilities as well. Other vendors are likely to offer similar capabilities for their server platforms, but, as can be seen, there is no single solution available that is supported by all vendors. Direct measurement of power consumption is often only necessary to maximize savings as far as possible. Fortunately, other means are available to measure if changes are in effect or how the system is behaving. This chapter describes the necessary tools.
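As one hedged example of the software-level measurement mentioned above (the tool choice and flags are assumptions, not taken from this section), PowerTOP can report wakeups, device activity, and tuning suggestions without any external power meter:

# Illustrative only: measure for 60 seconds and write an HTML report.
powertop --time=60 --html=powertop-report.html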
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/audit_and_analysis
Chapter 3. Making Media
Chapter 3. Making Media This chapter describes how to use ISO image files obtained by following the steps in Chapter 2, Downloading Red Hat Enterprise Linux to create bootable physical media, such as a DVD or a USB flash drive. You can then use these media to boot the installation program and start the installation. These steps only apply if you plan to install Red Hat Enterprise Linux on a 64-bit AMD, Intel, or ARM system, or an IBM Power Systems server using physical boot media. For information about installing Red Hat Enterprise Linux on an IBM Z server, see Chapter 16, Booting the Installation on IBM Z . For instructions on how to set up a Preboot Execution Environment (PXE) server to perform a PXE-based installation over a network, see Chapter 24, Preparing for a Network Installation . Note By default, the inst.stage2= boot option is used on the installation media and set to a specific label (for example, inst.stage2=hd:LABEL=RHEL7\x20Server.x86_64 ). If you modify the default label of the file system containing the runtime image, or if using a customized procedure to boot the installation system, you must ensure this option is set to the correct value. See Specifying the Installation Source for details. 3.1. Making an Installation CD or DVD You can make an installation CD or DVD using burning software on your computer and a CD/DVD burner. The exact series of steps that produces an optical disc from an ISO image file varies greatly from computer to computer, depending on the operating system and disc burning software installed. Consult your burning software's documentation for the exact steps needed to burn a CD or DVD from an ISO image file. Note It is possible to use optical discs (CDs and DVDs) to create both minimal boot media and full installation media. However, it is important to note that due to the large size of the full installation ISO image (between 4 and 4.5 GB), only a DVD can be used to create a full installation disc. Minimal boot ISO is roughly 300 MB, allowing it to be burned to either a CD or a DVD. Make sure that your disc burning software is capable of burning discs from image files. Although this is true of most disc burning software, exceptions exist. In particular, note that the disc burning feature built into Windows XP and Windows Vista cannot burn DVDs; and that earlier Windows operating systems did not have any disc burning capability installed by default at all. Therefore, if your computer has a Windows operating system prior to Windows 7 installed on it, you need a separate piece of software for this task. Examples of popular disc burning software for Windows that you might already have on your computer include Nero Burning ROM and Roxio Creator . Most widely used disc burning software for Linux, such as Brasero and K3b , also has the built-in ability to burn discs from ISO image files. On some computers, the option to burn a disc from an ISO file is integrated into a context menu in the file browser. For example, when you right-click an ISO file on a computer with a Linux or UNIX operating system which runs the GNOME desktop, the Nautilus file browser presents you with the option to Write to disk .
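The chapter introduction also mentions USB flash drives as boot media. A minimal sketch of writing an ISO image to a USB stick from a Linux system follows; the device name /dev/sdX and the image file name are placeholders, and writing to the wrong device destroys its contents.

# Illustrative only: identify the USB device first (for example, with lsblk),
# then write the image to the whole device, not to a partition.
umount /dev/sdX* 2>/dev/null
dd if=rhel-server-dvd.iso of=/dev/sdX bs=512k
sync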
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/chap-making-media
23.8. Serving NTP Time With PTP
23.8. Serving NTP Time With PTP NTP to PTP synchronization in the opposite direction is also possible. When ntpd is used to synchronize the system clock, ptp4l can be configured with the priority1 option (or other clock options included in the best master clock algorithm) to be the grandmaster clock and distribute the time from the system clock via PTP : With hardware time stamping, phc2sys needs to be used to synchronize the PTP hardware clock to the system clock: To prevent quick changes in the PTP clock's frequency, the synchronization to the system clock can be loosened by using smaller P (proportional) and I (integral) constants of the PI servo:
[ "~]# cat /etc/ptp4l.conf [global] priority1 127 ptp4l -f /etc/ptp4l.conf", "~]# phc2sys -c eth3 -s CLOCK_REALTIME -w", "~]# phc2sys -c eth3 -s CLOCK_REALTIME -w -P 0.01 -I 0.0001" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s0-serving_ntp_time_with_ptp
31.3. Dual Head Display Settings
31.3. Dual Head Display Settings If multiple video cards are installed on the system, dual head monitor support is available and is configured via the Dual head tab, as shown in Figure 31.3, "Dual Head Display Settings" . Figure 31.3. Dual Head Display Settings To enable use of Dual head, check the Use dual head checkbox. To configure the second monitor type, click the corresponding Configure button. You can also configure the other Dual head settings by using the corresponding drop-down list. For the Desktop layout option, selecting Spanning Desktops allows both monitors to use an enlarged usable workspace. Selecting Individual Desktops shares the mouse and keyboard among the displays, but restricts windows to a single display.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-xconfig-dualhead
8.8 Release Notes
8.8 Release Notes Red Hat Enterprise Linux 8.8 Release Notes for Red Hat Enterprise Linux 8.8 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.8_release_notes/index
4.6.2. REAL SERVER Subsection
4.6.2. REAL SERVER Subsection Clicking on the REAL SERVER subsection link at the top of the panel displays the EDIT REAL SERVER subsection, which shows the status of the physical server hosts for a particular virtual service. Figure 4.7. The REAL SERVER Subsection Click the ADD button to add a new server. To delete an existing server, select the radio button beside it and click the DELETE button. Click the EDIT button to load the EDIT REAL SERVER panel, as seen in Figure 4.8, "The REAL SERVER Configuration Panel" . Figure 4.8. The REAL SERVER Configuration Panel This panel consists of three entry fields: Name A descriptive name for the real server. Note This name is not the hostname for the machine, so make it descriptive and easily identifiable. Address The real server's IP address. Since the listening port is already specified for the associated virtual server, do not add a port number. Weight An integer value indicating this host's capacity relative to that of other hosts in the pool. The value can be arbitrary, but treat it as a ratio in relation to other real servers in the pool. For more on server weight, see Section 1.3.2, "Server Weight and Scheduling" . Warning Remember to click the ACCEPT button after making any changes in this panel to make sure that you do not lose any changes when selecting a new panel.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s2-piranha-virtservs-rs-VSA
Chapter 10. Configuring custom SSL/TLS certificates for Red Hat Quay on OpenShift Container Platform
Chapter 10. Configuring custom SSL/TLS certificates for Red Hat Quay on OpenShift Container Platform When Red Hat Quay is deployed on OpenShift Container Platform, the tls component of the QuayRegistry custom resource definition (CRD) is set to managed by default. As a result, OpenShift Container Platform's Certificate Authority is used to create HTTPS endpoints and to rotate SSL/TLS certificates. You can configure custom SSL/TLS certificates before or after the initial deployment of Red Hat Quay on OpenShift Container Platform. This process involves creating or updating the configBundleSecret resource within the QuayRegistry YAML file to integrate your custom certificates and setting the tls component to unmanaged . Important When configuring custom SSL/TLS certificates for Red Hat Quay, administrators are responsible for certificate rotation. The following procedures enable you to apply custom SSL/TLS certificates to ensure secure communication and meet specific security requirements for your Red Hat Quay on OpenShift Container Platform deployment. These steps assumed you have already created a Certificate Authority (CA) bundle or an ssl.key , and an ssl.cert . The procedure then shows you how to integrate those files into your Red Hat Quay on OpenShift Container Platform deployment, which ensures that your registry operates with the specified security settings and conforms to your organization's SSL/TLS policies. Note The following procedure is used for securing Red Hat Quay with an HTTPS certificate. Note that this differs from managing Certificate Authority Trust Bundles. CA Trust Bundles are used by system processes within the Quay container to verify certificates against trusted CAs, and ensure that services like LDAP, storage backend, and OIDC connections are trusted. If you are adding the certificates to an existing deployment, you must include the existing config.yaml file in the new config bundle secret, even if you are not making any configuration changes. 10.1. Creating a Certificate Authority Use the following procedure to set up your own CA and use it to issue a server certificate for your domain. This allows you to secure communications with SSL/TLS using your own certificates. 
Procedure Generate the root CA key by entering the following command: USD openssl genrsa -out rootCA.key 2048 Generate the root CA certificate by entering the following command: USD openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Generate the server key by entering the following command: USD openssl genrsa -out ssl.key 2048 Generate a signing request by entering the following command: USD openssl req -new -key ssl.key -out ssl.csr Enter the information that will be incorporated into your certificate request, including the server hostname, for example: Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []: Create a configuration file openssl.cnf , specifying the server hostname, for example: Example openssl.cnf file [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112 Use the configuration file to generate the certificate ssl.cert : USD openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf Confirm your created certificates and files by entering the following command: USD ls /path/to/certificates Example output rootCA.key ssl-bundle.cert ssl.key custom-ssl-config-bundle-secret.yaml rootCA.pem ssl.cert openssl.cnf rootCA.srl ssl.csr 10.2. Creating a custom SSL/TLS configBundleSecret resource After creating your custom SSL/TLS certificates, you can create a custom configBundleSecret resource for Red Hat Quay on OpenShift Container Platform, which allows you to upload ssl.cert and ssl.key files. Prerequisites You have base64 decoded the original config bundle into a config.yaml file. For more information, see Downloading the existing configuration . You have generated custom SSL certificates and keys. Procedure Create a new YAML file, for example, custom-ssl-config-bundle-secret.yaml : USD touch custom-ssl-config-bundle-secret.yaml Create the custom-ssl-config-bundle-secret resource. Create the resource by entering the following command: USD oc -n <namespace> create secret generic custom-ssl-config-bundle-secret \ --from-file=config.yaml=</path/to/config.yaml> \ 1 --from-file=ssl.cert=</path/to/ssl.cert> \ 2 --from-file=extra_ca_cert_<name-of-certificate>.crt=ca-certificate-bundle.crt \ 3 --from-file=ssl.key=</path/to/ssl.key> \ 4 --dry-run=client -o yaml > custom-ssl-config-bundle-secret.yaml 1 Where <config.yaml> is your base64 decoded config.yaml file. 2 Where <ssl.cert> is your ssl.cert file. 3 Optional. 
The --from-file=extra_ca_cert_<name-of-certificate>.crt=ca-certificate-bundle.crt field allows Red Hat Quay to recognize custom Certificate Authority (CA) files. If you are using LDAP, OIDC, or another service that uses custom CAs, you must add them via the extra_ca_cert path. For more information, see "Adding additional Certificate Authorities to Red Hat Quay on OpenShift Container Platform." 4 Where <ssl.key> is your ssl.key file. Optional. You can check the content of the custom-ssl-config-bundle-secret.yaml file by entering the following command: USD cat custom-ssl-config-bundle-secret.yaml Example output apiVersion: v1 data: config.yaml: QUxMT1dfUFVMTFNfV0lUSE9VVF9TVFJJQ1RfTE9HR0lORzogZmFsc2UKQVVUSEVOVElDQVRJT05fVFlQRTogRGF0YWJhc2UKREVGQVVMVF9UQUdfRVhQSVJBVElPTjogMncKRElTVFJJQlVURURfU1R... ssl.cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYakNDQTBhZ0F3SUJBZ0lVTUFBRk1YVWlWVHNoMGxNTWI3U1l0eFV5eTJjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2dZZ3hDekFKQmdOVkJBWVR... extra_ca_cert_<name-of-certificate>:LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYakNDQTBhZ0F3SUJBZ0lVTUFBRk1YVWlWVHNoMGxNTWI3U1l0eFV5eTJjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2dZZ3hDe... ssl.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2c0VWxZOVV1SVJPY1oKcFhpZk9MVEdqaS9neUxQMlpiMXQ... kind: Secret metadata: creationTimestamp: null name: custom-ssl-config-bundle-secret namespace: <namespace> Create the configBundleSecret resource by entering the following command: USD oc create -n <namespace> -f custom-ssl-config-bundle-secret.yaml Example output secret/custom-ssl-config-bundle-secret created Update the QuayRegistry YAML file to reference the custom-ssl-config-bundle-secret object by entering the following command: USD oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{"spec":{"configBundleSecret":"custom-ssl-config-bundle-secret"}}' Example output quayregistry.quay.redhat.com/example-registry patched Set the tls component of the QuayRegistry YAML to false by entering the following command: USD oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{"spec":{"components":[{"kind":"tls","managed":false}]}}' Example output quayregistry.quay.redhat.com/example-registry patched Ensure that your QuayRegistry YAML file has been updated to use the custom SSL configBundleSecret resource, and that your and tls resource is set to false by entering the following command: USD oc get quayregistry <registry_name> -n <namespace> -o yaml Example output # ... configBundleSecret: custom-ssl-config-bundle-secret # ... spec: components: - kind: tls managed: false # ... Verification Confirm a TLS connection to the server and port by entering the following command: USD openssl s_client -connect <quay-server.example.com>:443 Example output # ... SSL-Session: Protocol : TLSv1.3 Cipher : TLS_AES_256_GCM_SHA384 Session-ID: 0E995850DC3A8EB1A838E2FF06CE56DBA81BD8443E7FA05895FBD6FBDE9FE737 Session-ID-ctx: Resumption PSK: 1EA68F33C65A0F0FA2655BF9C1FE906152C6E3FEEE3AEB6B1B99BA7C41F06077989352C58E07CD2FBDC363FA8A542975 PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 7200 (seconds) # ... steps Red Hat Quay features
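Before building the configBundleSecret, you might want to confirm that the generated certificate chains back to your CA and carries the expected hostname. The checks below are a hedged sketch using standard openssl commands and the file names from the procedure above.

# Verify that ssl.cert was signed by the root CA created earlier.
openssl verify -CAfile rootCA.pem ssl.cert

# Confirm that the Subject Alternative Name includes the registry hostname.
openssl x509 -in ssl.cert -noout -text | grep -A1 "Subject Alternative Name"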
[ "openssl genrsa -out rootCA.key 2048", "openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com", "openssl genrsa -out ssl.key 2048", "openssl req -new -key ssl.key -out ssl.csr", "Country Name (2 letter code) [XX]:IE State or Province Name (full name) []:GALWAY Locality Name (eg, city) [Default City]:GALWAY Organization Name (eg, company) [Default Company Ltd]:QUAY Organizational Unit Name (eg, section) []:DOCS Common Name (eg, your name or your server's hostname) []:quay-server.example.com Email Address []:", "[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = <quay-server.example.com> IP.1 = 192.168.1.112", "openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf", "ls /path/to/certificates", "rootCA.key ssl-bundle.cert ssl.key custom-ssl-config-bundle-secret.yaml rootCA.pem ssl.cert openssl.cnf rootCA.srl ssl.csr", "touch custom-ssl-config-bundle-secret.yaml", "oc -n <namespace> create secret generic custom-ssl-config-bundle-secret --from-file=config.yaml=</path/to/config.yaml> \\ 1 --from-file=ssl.cert=</path/to/ssl.cert> \\ 2 --from-file=extra_ca_cert_<name-of-certificate>.crt=ca-certificate-bundle.crt \\ 3 --from-file=ssl.key=</path/to/ssl.key> \\ 4 --dry-run=client -o yaml > custom-ssl-config-bundle-secret.yaml", "cat custom-ssl-config-bundle-secret.yaml", "apiVersion: v1 data: config.yaml: QUxMT1dfUFVMTFNfV0lUSE9VVF9TVFJJQ1RfTE9HR0lORzogZmFsc2UKQVVUSEVOVElDQVRJT05fVFlQRTogRGF0YWJhc2UKREVGQVVMVF9UQUdfRVhQSVJBVElPTjogMncKRElTVFJJQlVURURfU1R ssl.cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYakNDQTBhZ0F3SUJBZ0lVTUFBRk1YVWlWVHNoMGxNTWI3U1l0eFV5eTJjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2dZZ3hDekFKQmdOVkJBWVR extra_ca_cert_<name-of-certificate>:LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVYakNDQTBhZ0F3SUJBZ0lVTUFBRk1YVWlWVHNoMGxNTWI3U1l0eFV5eTJjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2dZZ3hDe ssl.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2c0VWxZOVV1SVJPY1oKcFhpZk9MVEdqaS9neUxQMlpiMXQ kind: Secret metadata: creationTimestamp: null name: custom-ssl-config-bundle-secret namespace: <namespace>", "oc create -n <namespace> -f custom-ssl-config-bundle-secret.yaml", "secret/custom-ssl-config-bundle-secret created", "oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{\"spec\":{\"configBundleSecret\":\"custom-ssl-config-bundle-secret\"}}'", "quayregistry.quay.redhat.com/example-registry patched", "oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{\"spec\":{\"components\":[{\"kind\":\"tls\",\"managed\":false}]}}'", "quayregistry.quay.redhat.com/example-registry patched", "oc get quayregistry <registry_name> -n <namespace> -o yaml", "configBundleSecret: custom-ssl-config-bundle-secret spec: components: - kind: tls managed: false", "openssl s_client -connect <quay-server.example.com>:443", "SSL-Session: Protocol : TLSv1.3 Cipher : TLS_AES_256_GCM_SHA384 Session-ID: 
0E995850DC3A8EB1A838E2FF06CE56DBA81BD8443E7FA05895FBD6FBDE9FE737 Session-ID-ctx: Resumption PSK: 1EA68F33C65A0F0FA2655BF9C1FE906152C6E3FEEE3AEB6B1B99BA7C41F06077989352C58E07CD2FBDC363FA8A542975 PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 7200 (seconds)" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-custom-ssl-certs-config-bundle
Chapter 5. The Redfish modules in RHEL
Chapter 5. The Redfish modules in RHEL The Redfish modules for remote management of devices are now part of the redhat.rhel_mgmt Ansible collection. With the Redfish modules, you can easily use management automation on bare-metal servers and platform hardware by getting information about the servers or control them through an Out-Of-Band (OOB) controller, using the standard HTTPS transport and JSON format. 5.1. The Redfish modules The redhat.rhel_mgmt Ansible collection provides the Redfish modules to support hardware management in Ansible over Redfish. The redhat.rhel_mgmt collection is available in the ansible-collection-redhat-rhel_mgmt package. To install it, see Installing the redhat.rhel_mgmt Collection using the CLI . The following Redfish modules are available in the redhat.rhel_mgmt collection: redfish_info : The redfish_info module retrieves information about the remote Out-Of-Band (OOB) controller such as systems inventory. redfish_command : The redfish_command module performs Out-Of-Band (OOB) controller operations like log management and user management, and power operations such as system restart, power on and off. redfish_config : The redfish_config module performs OOB controller operations such as changing OOB configuration, or setting the BIOS configuration. 5.2. Redfish modules parameters The parameters used for the Redfish modules are: redfish_info parameters: Description baseuri (Mandatory) - Base URI of OOB controller. category (Mandatory) - List of categories to execute on OOB controller. The default value is ["Systems"]. command (Mandatory) - List of commands to execute on OOB controller. username Username for authentication to OOB controller. password Password for authentication to OOB controller. redfish_command parameters: Description baseuri (Mandatory) - Base URI of OOB controller. category (Mandatory) - List of categories to execute on OOB controller. The default value is ["Systems"]. command (Mandatory) - List of commands to execute on OOB controller. username Username for authentication to OOB controller. password Password for authentication to OOB controller. redfish_config parameters: Description baseuri (Mandatory) - Base URI of OOB controller. category (Mandatory) - List of categories to execute on OOB controller. The default value is ["Systems"]. command (Mandatory) - List of commands to execute on OOB controller. username Username for authentication to OOB controller. password Password for authentication to OOB controller. bios_attributes BIOS attributes to update. 5.3. Using the redfish_info module The following example shows how to use the redfish_info module in a playbook to get information about the CPU inventory. For simplicity, the example uses the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed. Prerequisites The redhat.rhel_mgmt collection is installed. The pyghmi library in the python3-pyghmi package is installed on the managed host. If you use localhost as the managed host, install the python3-pyghmi package on the host where you execute the playbook. OOB controller access details. 
Procedure Create a new playbook.yml file with the following content: --- - name: Get CPU inventory hosts: localhost tasks: - redhat.rhel_mgmt.redfish_info: baseuri: "{{ baseuri }}" username: "{{ username }}" password: "{{ password }}" category: Systems command: GetCpuInventory register: result Execute the playbook against localhost: As a result, the output returns the CPU inventory details. 5.4. Using the redfish_command module The following example shows how to use the redfish_command module in a playbook to turn on a system. For simplicity, the example uses the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed. Prerequisites The redhat.rhel_mgmt collection is installed. The pyghmi library in the python3-pyghmi package is installed on the managed host. If you use localhost as the managed host, install the python3-pyghmi package on the host where you execute the playbook. OOB controller access details. Procedure Create a new playbook.yml file with the following content: --- - name: Power on system hosts: localhost tasks: - redhat.rhel_mgmt.redfish_command: baseuri: "{{ baseuri }}" username: "{{ username }}" password: "{{ password }}" category: Systems command: PowerOn Execute the playbook against localhost: As a result, the system powers on. 5.5. Using the redfish_config module The following example shows how to use the redfish_config module in a playbook to configure a system to boot with UEFI. For simplicity, the example uses the same host as the Ansible control host and managed host, thus executing the modules on the same host where the playbook is executed. Prerequisites The redhat.rhel_mgmt collection is installed. The pyghmi library in the python3-pyghmi package is installed on the managed host. If you use localhost as the managed host, install the python3-pyghmi package on the host where you execute the playbook. OOB controller access details. Procedure Create a new playbook.yml file with the following content: --- - name: "Set BootMode to UEFI" hosts: localhost tasks: - redhat.rhel_mgmt.redfish_config: baseuri: "{{ baseuri }}" username: "{{ username }}" password: "{{ password }}" category: Systems command: SetBiosAttributes bios_attributes: BootMode: Uefi Execute the playbook against localhost: As a result, the system boot mode is set to UEFI.
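The playbooks above reference baseuri, username, and password as variables but do not show where they are defined. One common way to supply them is at run time with --extra-vars; the values below are placeholders for your OOB controller details.

# Illustrative only: pass the OOB controller access details as extra variables.
ansible-playbook playbook.yml \
  --extra-vars "baseuri=192.0.2.10 username=admin password=<password>"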
[ "--- - name: Get CPU inventory hosts: localhost tasks: - redhat.rhel_mgmt.redfish_info: baseuri: \"{{ baseuri }}\" username: \"{{ username }}\" password: \"{{ password }}\" category: Systems command: GetCpuInventory register: result", "ansible-playbook playbook.yml", "--- - name: Power on system hosts: localhost tasks: - redhat.rhel_mgmt.redfish_command: baseuri: \"{{ baseuri }}\" username: \"{{ username }}\" password: \"{{ password }}\" category: Systems command: PowerOn", "ansible-playbook playbook.yml", "--- - name: \"Set BootMode to UEFI\" hosts: localhost tasks: - redhat.rhel_mgmt.redfish_config: baseuri: \"{{ baseuri }}\" username: \"{{ username }}\" password: \"{{ password }}\" category: Systems command: SetBiosAttributes bios_attributes: BootMode: Uefi", "ansible-playbook playbook.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/assembly_the-redfish-modules-in-rhel_automating-system-administration-by-using-rhel-system-roles
Chapter 8. Ceph debugging and logging configuration
Chapter 8. Ceph debugging and logging configuration As a storage administrator, you can increase the amount of debugging and logging information to help diagnose problems with Red Hat Ceph Storage. 8.1. Prerequisites Installation of the Red Hat Ceph Storage software. 8.2. Ceph debugging and logging Debug settings are NOT required in the Ceph configuration file, but can be added to optimize logging. Changes to the Ceph logging configuration are usually made at runtime when a problem occurs, but can also be made in the Ceph configuration file. For example, if there are issues when starting the cluster, consider increasing log settings in the Ceph configuration file. When the problem is resolved, remove the settings or restore them to optimal settings for runtime operations. By default, Ceph log files are located under /var/log/ceph . Tip When debug output slows down the cluster, the latency can hide race conditions. Logging is resource-intensive. If there is a problem in a specific area of the cluster, enable logging for that area of the cluster. For example, if OSDs are running fine but Ceph Object Gateways are not, start by enabling debug logging for the specific gateway instances encountering problems. Increase or decrease logging for each subsystem as needed. Important Verbose logging can generate over 1 GB of data per hour. If the OS disk reaches its capacity, the node will stop working. If Ceph logging is enabled or the rate of logging is increased, ensure that the OS disk has sufficient capacity. When the cluster is running well, remove unnecessary debugging settings to ensure the cluster runs optimally. Logging debug output messages is relatively slow, and a waste of resources when operating your cluster. 8.3. Additional Resources See Appendix J for descriptions and usage of all the Red Hat Ceph Storage debugging and logging configuration options.
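As a hedged illustration of enabling debug logging only for the area that is misbehaving (the daemon, subsystem, and levels below are assumptions, and the exact interface can vary between releases), debug levels can be raised at runtime and restored afterwards:

# Illustrative only: raise the OSD debug level on one daemon at runtime ...
ceph tell osd.0 injectargs '--debug-osd 0/20'

# ... and restore a lower level once the investigation is complete.
ceph tell osd.0 injectargs '--debug-osd 0/5'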
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/configuration_guide/ceph-debugging-and-logging-configuration
2.6. TCP Wrappers and xinetd
2.6. TCP Wrappers and xinetd Controlling access to network services is one of the most important security tasks facing a server administrator. Red Hat Enterprise Linux provides several tools for this purpose. For example, an iptables -based firewall filters out unwelcome network packets within the kernel's network stack. For network services that utilize it, TCP Wrappers add an additional layer of protection by defining which hosts are or are not allowed to connect to " wrapped " network services. One such wrapped network service is the xinetd super server . This service is called a super server because it controls connections to a subset of network services and further refines access control. Figure 2.4, "Access Control to Network Services" is a basic illustration of how these tools work together to protect network services. Figure 2.4. Access Control to Network Services For more information about using firewalls with iptables , see Section 2.8.9, "IPTables" . 2.6.1. TCP Wrappers The TCP Wrappers packages ( tcp_wrappers and tcp_wrappers-libs ) are installed by default and provide host-based access control to network services. The most important component within the package is the /lib/libwrap.so or /lib64/libwrap.so library. In general terms, a TCP-wrapped service is one that has been compiled against the libwrap.so library. When a connection attempt is made to a TCP-wrapped service, the service first references the host's access files ( /etc/hosts.allow and /etc/hosts.deny ) to determine whether or not the client is allowed to connect. In most cases, it then uses the syslog daemon ( syslogd ) to write the name of the requesting client and the requested service to /var/log/secure or /var/log/messages . If a client is allowed to connect, TCP Wrappers release control of the connection to the requested service and take no further part in the communication between the client and the server. In addition to access control and logging, TCP Wrappers can execute commands to interact with the client before denying or releasing control of the connection to the requested network service. Because TCP Wrappers are a valuable addition to any server administrator's arsenal of security tools, most network services within Red Hat Enterprise Linux are linked to the libwrap.so library. Such applications include /usr/sbin/sshd , /usr/sbin/sendmail , and /usr/sbin/xinetd . Note To determine if a network service binary is linked to libwrap.so , type the following command as the root user: ldd <binary-name> | grep libwrap Replace <binary-name> with the name of the network service binary. If the command returns straight to the prompt with no output, then the network service is not linked to libwrap.so . The following example indicates that /usr/sbin/sshd is linked to libwrap.so : 2.6.1.1. Advantages of TCP Wrappers TCP Wrappers provide the following advantages over other network service control techniques: Transparency to both the client and the wrapped network service - Both the connecting client and the wrapped network service are unaware that TCP Wrappers are in use. Legitimate users are logged and connected to the requested service while connections from banned clients fail. Centralized management of multiple protocols - TCP Wrappers operate separately from the network services they protect, allowing many server applications to share a common set of access control configuration files, making for simpler management.
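The access decision described above is driven by the /etc/hosts.allow and /etc/hosts.deny files. The entries below are a small illustrative sketch rather than a recommended policy: they permit sshd connections only from one subnet and deny all other wrapped services; the subnet is a placeholder value.

# /etc/hosts.allow -- allow sshd from a single management subnet (example value)
sshd : 192.168.1.0/255.255.255.0

# /etc/hosts.deny -- deny everything that is not explicitly allowed above
ALL : ALL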
[ "~]# ldd /usr/sbin/sshd | grep libwrap libwrap.so.0 => /lib/libwrap.so.0 (0x00655000)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-Security_Guide-TCP_Wrappers_and_xinetd
5.3. Resizing an ext4 File System
5.3. Resizing an ext4 File System Before growing an ext4 file system, ensure that the underlying block device is of an appropriate size to hold the file system later. Use the appropriate resizing methods for the affected block device. An ext4 file system can be grown while it is mounted by using the resize2fs command: The resize2fs command can also decrease the size of an unmounted ext4 file system: When resizing an ext4 file system, the resize2fs utility reads the size in units of file system block size, unless a suffix indicating a specific unit is used. The following suffixes indicate specific units: s - 512-byte sectors K - kilobytes M - megabytes G - gigabytes Note The size parameter is optional (and often redundant) when expanding. The resize2fs utility automatically expands to fill all available space of the container, usually a logical volume or partition. For more information about resizing an ext4 file system, refer to man resize2fs .
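As a worked sketch of the commands above (the volume name, mount point, and target size are placeholders): growing can be done while the file system is mounted, whereas shrinking requires unmounting it and running a file system check first.

# Grow a mounted ext4 file system to fill its underlying logical volume.
resize2fs /dev/vg0/lv_data

# Shrink an ext4 file system to 20G; it must be unmounted and checked first.
umount /mnt/data
e2fsck -f /dev/vg0/lv_data
resize2fs /dev/vg0/lv_data 20G
mount /dev/vg0/lv_data /mnt/data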
[ "resize2fs /mount/device size", "resize2fs /dev/ device size" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ext4grow
Chapter 9. Configure maximum transmission unit (MTU) settings
Chapter 9. Configure maximum transmission unit (MTU) settings 9.1. MTU overview OpenStack Networking can calculate the largest possible maximum transmission unit (MTU) size that you can apply safely to instances. The MTU value specifies the maximum amount of data that a single network packet can transfer; this number is variable depending on the most appropriate size for the application. For example, NFS shares might require a different MTU size from that of a VoIP application. Note You can use the neutron net-show command to view the largest possible MTU values that OpenStack Networking calculates. net-mtu is a neutron API extension that is not present in some implementations. The MTU value that you require can be advertised to DHCPv4 clients for automatic configuration, if supported by the instance, as well as to IPv6 clients through Router Advertisement (RA) packets. To send Router Advertisements, the network must be attached to a router. You must configure MTU settings consistently from end-to-end. This means that the MTU setting must be the same at every point the packet passes through, including the VM, the virtual network infrastructure, the physical network, and the destination server. For example, the circles in the following diagram indicate the various points where an MTU value must be adjusted for traffic between an instance and a physical server. You must change the MTU value for every interface that handles network traffic to accommodate packets of a particular MTU size. This is necessary if traffic travels from the instance 192.168.200.15 through to the physical server 10.20.15.25 : Inconsistent MTU values can result in several network issues, the most common being random packet loss that results in connection drops and slow network performance. Such issues are problematic to troubleshoot because you must identify and examine every possible network point to ensure it has the correct MTU value. 9.2. Configuring MTU Settings in Director This example demonstrates how to set the MTU using the NIC config templates. You must set the MTU on the bridge, bond (if applicable), interface(s), and VLAN(s): 9.3. Reviewing the resulting MTU calculation You can view the calculated MTU value, which is the largest possible MTU value that instances can use. Use this calculated MTU value to configure all interfaces involved in the path of network traffic.
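As a quick end-to-end check (an illustrative sketch using the example addresses above; adjust the size to your own MTU), you can send a packet that is not allowed to fragment from the instance to the physical server, for example ping -M do -s 8972 10.20.15.25 for a 9000-byte MTU. The 8972-byte payload plus 20 bytes of IP header and 8 bytes of ICMP header adds up to 9000, so the ping fails instead of being silently fragmented if any hop along the path has a smaller MTU.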
[ "- type: ovs_bridge name: br-isolated use_dhcp: false mtu: 9000 # <--- Set MTU members: - type: ovs_bond name: bond1 mtu: 9000 # <--- Set MTU ovs_options: {get_param: BondInterfaceOvsOptions} members: - type: interface name: ens15f0 mtu: 9000 # <--- Set MTU primary: true - type: interface name: enp131s0f0 mtu: 9000 # <--- Set MTU - type: vlan device: bond1 vlan_id: {get_param: InternalApiNetworkVlanID} mtu: 9000 # <--- Set MTU addresses: - ip_netmask: {get_param: InternalApiIpSubnet} - type: vlan device: bond1 mtu: 9000 # <--- Set MTU vlan_id: {get_param: TenantNetworkVlanID} addresses: - ip_netmask: {get_param: TenantIpSubnet}", "openstack network show <network>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/networking_guide/sec-mtu
6.4. Hardware Security Module
6.4. Hardware Security Module To use a Hardware Security Module (HSM), a Federal Information Processing Standard (FIPS) 140-2 validated HSM is required. See your HSM documentation for instructions on installing and configuring the HSM, and on setting it up in FIPS mode. 6.4.1. Setting up SELinux for an HSM Certain HSMs require that you manually update SELinux settings before you can install Certificate System. The following section describes the required actions for supported HSMs: nCipher nShield After you install the HSM and before you start installing Certificate System: Reset the context of files in the /opt/nfast/ directory: Restart the nfast software. Thales Luna HSM No SELinux-related actions are required before you start installing Certificate System. For details about the supported HSMs, see Section 4.4, "Supported Hardware Security Modules" . 6.4.2. Enabling FIPS Mode on an HSM To enable FIPS Mode on HSMs, please refer to your HSM vendor's documentation for specific instructions. Important nCipher HSM On a nCipher HSM, the FIPS mode can only be enabled when generating the Security World; this cannot be changed afterwards. While there is a variety of ways to generate the Security World, the preferred method is always to use the new-world command. For guidance on how to generate a FIPS-compliant Security World, please follow the nCipher HSM vendor's documentation. LunaSA HSM Similarly, enabling the FIPS mode on a Luna HSM must be done during the initial configuration, since changing this policy zeroizes the HSM as a security measure. For details, please refer to the Luna HSM vendor's documentation. 6.4.3. Verifying if FIPS Mode is Enabled on an HSM This section describes how to verify if FIPS mode is enabled for certain HSMs. For other HSMs, see the hardware manufacturer's documentation. 6.4.3.1. Verifying if FIPS Mode is Enabled on an nCipher HSM Note Please refer to your HSM vendor's documentation for the complete procedure. To verify if the FIPS mode is enabled on an nCipher HSM, enter: With older versions of the software, if StrictFIPS140 is listed among the state flags, the FIPS mode is enabled. In newer versions, however, it is better to check the new mode line and look for fips1402level3 . In all cases, there should also be an hkfips key present in the nfkminfo output. 6.4.3.2. Verifying if FIPS Mode is Enabled on a Luna SA HSM Note Please refer to your HSM vendor's documentation for the complete procedure. To verify if the FIPS mode is enabled on a Luna SA HSM: Open the lunash management console Use the hsm show command and verify that the output contains the text The HSM is in FIPS 140-2 approved operation mode. : 6.4.4. Preparing for Installing Certificate System with an HSM In Section 7.3, "Understanding the pkispawn Utility" , you are instructed to use the following parameters in the configuration file you pass to the pkispawn utility when installing Certificate System with an HSM: The values of the pki_hsm_libfile and pki_token_name parameters depend on your specific HSM installation. These values allow the pkispawn utility to set up your HSM and enable Certificate System to connect to it. The value of the pki_token_password depends upon your particular HSM token's password. The password gives the pkispawn utility read and write permissions to create new keys on the HSM. The value of the pki_hsm_modulename is a name used in later pkispawn operations to identify the HSM. The string is an identifier you can set to whatever you like. 
It allows pkispawn and Certificate System to refer to the HSM and configuration information by name in later operations. The following section provides settings for individual HSMs. If your HSM is not listed, consult your HSM manufacturer's documentation. 6.4.4.1. nCipher HSM Parameters For a nCipher HSM, set the following parameters: Note that you can set the value of pki_hsm_modulename to any value. The above is a suggested value. Example 6.1. Identifying the Token Name To identify the token name, run the following command as the root user: The value of the name field in the Cardset section lists the token name. Set the token name as follows: 6.4.4.2. SafeNet / Luna SA HSM Parameters For a SafeNet / Luna SA HSM, such as a SafeNet Luna Network HSM, specify the following parameters: Note that you can set the value of pki_hsm_modulename to any value. The above is a suggested value. Example 6.2. Identifying the Token Name To identify the token name, run the following command as the root user: The value in the label column lists the token name. Set the token name as follows: 6.4.5. Backing up Keys on Hardware Security Modules It is not possible to export keys and certificates stored on an HSM to a .p12 file. If such an instance is to be backed up, contact the manufacturer of your HSM for support.
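As a practical follow-up to the preparation in Section 6.4.4 (a sketch only; the configuration file path is hypothetical), you pass the file containing these HSM parameters to pkispawn when installing a subsystem, for example pkispawn -s CA -f /root/ca-hsm.cfg for a CA instance.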
[ "restorecon -R /opt/nfast/", "/opt/nfast/sbin/init.d-ncipher restart", "/opt/nfast/bin/nfkminfo", "lunash:> hsm show FIPS 140-2 Operation: ===================== The HSM is in FIPS 140-2 approved operation mode.", "######################### Provide HSM parameters # ########################## pki_hsm_enable=True pki_hsm_libfile= hsm_libfile pki_hsm_modulename= hsm_modulename pki_token_name= hsm_token_name pki_token_password= pki_token_password ######################################## Provide PKI-specific HSM token names # ######################################## pki_audit_signing_token= hsm_token_name pki_ssl_server_token= hsm_token_name pki_subsystem_token= hsm_token_name", "pki_hsm_libfile=/opt/nfast/toolkits/pkcs11/libcknfast.so pki_hsm_modulename=nfast", "/opt/nfast/bin/nfkminfo World generation 2 ...~snip~ Cardset name \" NHSM-CONN-XC \" k-out-of-n 1/4 flags NotPersistent PINRecoveryRequired(enabled) !RemoteEnabled timeout none ...~snip~", "pki_token_name=NHSM-CONN-XC", "pki_hsm_libfile=/usr/safenet/lunaclient/lib/libCryptoki2_64.so pki_hsm_modulename=lunasa", "/usr/safenet/lunaclient/bin/vtl verify The following Luna SA Slots/Partitions were found: Slot Serial # Label ==== ================ ===== 0 1209461834772 lunasaQE", "pki_token_name=lunasaQE" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/hardware_security_module
Chapter 6. Managing user sessions
Chapter 6. Managing user sessions When users log into realms, Red Hat build of Keycloak maintains a user session for each user and remembers each client visited by the user within the session. Realm administrators can perform multiple actions on each user session: View login statistics for the realm. View active users and where they logged in. Log a user out of their session. Revoke tokens. Set up token timeouts. Set up session timeouts. 6.1. Administering sessions To see a top-level view of the active clients and sessions in Red Hat build of Keycloak, click Sessions from the menu. Sessions 6.1.1. Signing out all active sessions You can sign out all users in the realm. From the Action list, select Sign out all active sessions . All SSO cookies become invalid. Red Hat build of Keycloak notifies clients of the logout event by using the Red Hat build of Keycloak OIDC client adapter. Clients requesting authentication within active browser sessions must log in again. Client types such as SAML do not receive a back-channel logout request. Note Clicking Sign out all active sessions does not revoke outstanding access tokens. Outstanding tokens must expire naturally. For clients using the Red Hat build of Keycloak OIDC client adapter, you can push a revocation policy to revoke the token, but this does not work for other adapters. 6.1.2. Viewing client sessions Procedure Click Clients in the menu. Click a client to see that client's sessions. Click the Sessions tab. Client sessions 6.1.3. Viewing user sessions Procedure Click Users in the menu. Click a user to see that user's sessions. Click the Sessions tab. User sessions 6.2. Revoking active sessions If your system is compromised, you can revoke all active sessions and access tokens. Procedure Click Sessions in the menu. From the Actions list, select Revocation . Revocation Using this console, specify a time and date; sessions or tokens issued before that time and date become invalid. Click Set to now to set the policy to the current time and date. Click Push to push this revocation policy to any registered OIDC client with the Red Hat build of Keycloak OIDC client adapter. 6.3. Session and token timeouts Red Hat build of Keycloak includes control of the session, cookie, and token timeouts through the Sessions and Tokens tabs in the Realm settings menu. Sessions tab Configuration Description SSO Session Idle This setting is for OIDC clients only. If a user is inactive for longer than this timeout, the user session is invalidated. This timeout value resets when clients request authentication or send a refresh token request. Red Hat build of Keycloak adds a window of time to the idle timeout before the session invalidation takes effect. See the note later in this section. SSO Session Max The maximum time before a user session expires. SSO Session Idle Remember Me This setting is similar to the standard SSO Session Idle configuration but specific to logins with Remember Me enabled. Users can specify longer session idle timeouts when they click Remember Me when logging in. This setting is an optional configuration and, if its value is not greater than zero, it uses the same idle timeout as the SSO Session Idle configuration. SSO Session Max Remember Me This setting is similar to the standard SSO Session Max but specific to Remember Me logins. Users can specify longer sessions when they click Remember Me when logging in. 
This setting is an optional configuration and, if its value is not greater than zero, it uses the same session lifespan as the SSO Session Max configuration. Client Session Idle Idle timeout for the client session. If the user is inactive for longer than this timeout, the client session is invalidated and the refresh token requests bump the idle timeout. This setting never affects the general SSO user session, which is unique. Note that the SSO user session is the parent of zero or more client sessions; one client session is created for every different client app the user logs in to. This value should specify a shorter idle timeout than the SSO Session Idle . Users can override it for individual clients in the Advanced Settings client tab. This setting is an optional configuration and, when set to zero, uses the same idle timeout in the SSO Session Idle configuration. Client Session Max The maximum time for a client session before a refresh token expires and is invalidated. As in the previous option, this setting never affects the SSO user session and should specify a shorter value than the SSO Session Max . Users can override it for individual clients in the Advanced Settings client tab. This setting is an optional configuration and, when set to zero, uses the same max timeout in the SSO Session Max configuration. Offline Session Idle This setting is for offline access . The amount of time the session remains idle before Red Hat build of Keycloak revokes its offline token. Red Hat build of Keycloak adds a window of time to the idle timeout before the session invalidation takes effect. See the note later in this section. Offline Session Max Limited This setting is for offline access . If this flag is Enabled , Offline Session Max can control the maximum time the offline token remains active, regardless of user activity. If the flag is Disabled , offline sessions never expire by lifespan, only by idle. Once this option is activated, the Offline Session Max (global option at realm level) and Client Offline Session Max (specific client level option in the Advanced Settings tab) can be configured. Offline Session Max This setting is for offline access , and it is the maximum time before Red Hat build of Keycloak revokes the corresponding offline token. This option controls the maximum amount of time the offline token remains active, regardless of user activity. Login timeout The total time a login attempt can take. If authentication takes longer than this time, the user must start the authentication process again. Login action timeout The maximum time users can spend on any one page during the authentication process. Tokens tab Configuration Description Default Signature Algorithm The default algorithm used to assign tokens for the realm. Revoke Refresh Token When Enabled , Red Hat build of Keycloak revokes refresh tokens and issues another token that the client must use. This action applies to OIDC clients performing the refresh token flow. Access Token Lifespan When Red Hat build of Keycloak creates an OIDC access token, this value controls the lifetime of the token. Access Token Lifespan For Implicit Flow With the Implicit Flow, Red Hat build of Keycloak does not provide a refresh token. A separate timeout exists for access tokens created by the Implicit Flow. Client login timeout The maximum time before clients must finish the Authorization Code Flow in OIDC. User-Initiated Action Lifespan The maximum time before a user's action permission expires. 
Keep this value short because users generally react to self-created actions quickly. Default Admin-Initiated Action Lifespan The maximum time before an action permission sent to a user by an administrator expires. Keep this value long to allow administrators to send e-mails to offline users. An administrator can override the default timeout before issuing the token. Email Verification Specifies independent timeout for email verification. IdP account email verification Specifies independent timeout for IdP account email verification. Forgot password Specifies independent timeout for forgot password. Execute actions Specifies independent timeout for execute actions. Note The following logic is only applied if persistent user sessions are not active: For idle timeouts, a two-minute window of time exists during which the session remains active. For example, when you have the timeout set to 30 minutes, it will be 32 minutes before the session expires. This action is necessary for some scenarios in cluster and cross-data center environments where the token refreshes on one cluster node a short time before the expiration and the other cluster nodes incorrectly consider the session as expired because they have not yet received the message about a successful refresh from the refreshing node. 6.4. Offline access During offline access logins, the client application requests an offline token instead of a refresh token. The client application saves this offline token and can use it for future logins if the user logs out. This action is useful if your application needs to perform offline actions on behalf of the user even when the user is not online. For example, a regular data backup. The client application is responsible for persisting the offline token in storage and then using it to retrieve new access tokens from the Red Hat build of Keycloak server. The difference between a refresh token and an offline token is that an offline token never expires and is not subject to the SSO Session Idle timeout and SSO Session Max lifespan. The offline token is valid after a user logout. You must use the offline token for a refresh token action at least once per thirty days or for the value of the Offline Session Idle . If you enable Offline Session Max Limited , offline tokens expire after 60 days even if you use the offline token for a refresh token action. You can change this value, Offline Session Max , in the Admin Console. When using offline access, client idle and max timeouts can be overridden at the client level . The options Client Offline Session Idle and Client Offline Session Max , in the client Advanced Settings tab, allow you to set shorter offline timeouts for a specific application. Note that client session values also control the refresh token expiration but they never affect the global offline user SSO session. The option Client Offline Session Max is only evaluated in the client if Offline Session Max Limited is Enabled at the realm level. If you enable the Revoke Refresh Token option, you can use each offline token once only. After refresh, you must store the new offline token from the refresh response instead of the previous one. Users can view and revoke offline tokens that Red Hat build of Keycloak grants them in the User Account Console . Administrators can revoke offline tokens for individual users in the Admin Console in the Consents tab. Administrators can view all offline tokens issued in the Offline Access tab of each client. Administrators can revoke offline tokens by setting a revocation policy . 
To issue an offline token, users must have the role mapping for the realm-level offline_access role. Clients must also have that role in their scope. Clients must add an offline_access client scope as an Optional client scope to the role, which is done by default. Clients can request an offline token by adding the parameter scope=offline_access when sending their authorization request to Red Hat build of Keycloak. The Red Hat build of Keycloak OIDC client adapter automatically adds this parameter when you use it to access your application's secured URL (such as http://localhost:8080/customer-portal/secured?scope=offline_access). The Direct Access Grant and Service Accounts support offline tokens if you include scope=offline_access in the authentication request body. Red Hat build of Keycloak will limit its internal cache for offline user and offline client sessions to 10000 entries by default, which will reduce the overall memory usage for offline sessions. Items which are evicted from memory will be loaded on-demand from the database when needed. To set different sizes for the caches, edit Red Hat build of Keycloak's cache config file to set a <memory max-count="..."/> for those caches. If you disabled the persistent-user-sessions feature, it is possible to reduce memory requirements using a configuration option that shortens the lifespan of imported offline sessions. Such sessions will be evicted from the Infinispan caches after the specified lifespan, but remain available in the database. This will lower memory consumption, especially for deployments with a large number of offline sessions. To specify the lifespan override for offline user sessions, start Red Hat build of Keycloak server with the following parameter: --spi-user-sessions-infinispan-offline-session-cache-entry-lifespan-override=<lifespan-in-seconds> Similarly for offline client sessions: --spi-user-sessions-infinispan-offline-client-session-cache-entry-lifespan-override=<lifespan-in-seconds> 6.5. Transient sessions You can conduct transient sessions in Red Hat build of Keycloak. When using transient sessions, Red Hat build of Keycloak does not create a user session after successful authentication. Red Hat build of Keycloak creates a temporary, transient session for the scope of the current request that successfully authenticates the user. Red Hat build of Keycloak can run protocol mappers using transient sessions after authentication. The sid and session_state of the tokens are usually empty when the token is issued with transient sessions. So during transient sessions, the client application cannot refresh tokens or validate a specific session. Sometimes these actions are unnecessary, so you can avoid the additional resource use of persisting user sessions. Skipping session persistence saves performance, memory, and network communication resources (in cluster and cross-data center environments). Currently, transient sessions are used automatically only during service account authentication with token refresh disabled. Note that token refresh is automatically disabled during service account authentication unless explicitly enabled by the client switch Use refresh tokens for client credentials grant .
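To make the offline access flow in Section 6.4 concrete (an illustrative sketch; the hostname, realm, client, and credentials are placeholders, the client must allow Direct Access Grants, and the default relative path is assumed), a Direct Access Grant request for an offline token could look like curl -d "client_id=my-app" -d "username=jdoe" -d "password=changeit" -d "grant_type=password" -d "scope=offline_access" https://keycloak.example.com/realms/myrealm/protocol/openid-connect/token . The refresh_token field of the JSON response then contains the offline token that the application stores and later exchanges for new access tokens.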
[ "--spi-user-sessions-infinispan-offline-session-cache-entry-lifespan-override=<lifespan-in-seconds>", "--spi-user-sessions-infinispan-offline-client-session-cache-entry-lifespan-override=<lifespan-in-seconds>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/managing_user_sessions
Chapter 3. Granting administration permissions to manage a CUPS server in the web interface
Chapter 3. Granting administration permissions to manage a CUPS server in the web interface By default, members of the sys , root , and wheel groups can perform administration tasks in the web interface. However, certain other services use these groups as well. For example, members of the wheel group can, by default, execute commands with root permissions by using sudo . To prevent CUPS administrators from gaining unexpected permissions in other services, use a dedicated group for CUPS administrators. Prerequisites CUPS is configured . The IP address of the client you want to use has permissions to access the administration area in the web interface. Procedure Create a group for CUPS administrators: Add the users who should manage the service in the web interface to the cups-admins group: Update the value of the SystemGroup parameter in the /etc/cups/cups-files.conf file, and append the cups-admins group: If only the cups-admins group should have administrative access, remove the other group names from the parameter. Restart CUPS: Verification Use a browser, and access https:// <hostname_or_ip_address> :631/admin/ . Note You can access the administration area in the web UI only if you use the HTTPS protocol. Start performing an administrative task. For example, click Add printer . The web interface prompts for a username and password. To proceed, authenticate by using credentials of a user who is a member of the cups-admins group. If authentication succeeds, this user can perform administrative tasks.
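As an additional check before testing in the browser (the username is a placeholder), you can run id <username> and verify that cups-admins appears in the list of groups for that user.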
[ "groupadd cups-admins", "usermod -a -G cups-admins <username>", "SystemGroup sys root wheel cups-admins", "systemctl restart cups" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_using_a_cups_printing_server/granting-administration-permissions-to-manage-a-cups-server-in-the-web-interface_configuring-printing