title | content | commands | url
---|---|---|---|
11.3. View | 11.3. View A view is a virtual table. A view contains rows and columns, like a real table. The fields in a view are fields from one or more real tables from the source or other view models. They can also be expressions made up of multiple columns, or aggregated columns. When column definitions are not defined on the view table, they are derived from the projected columns of the view's select transformation that is defined after the AS keyword. You can add functions, JOIN statements, and WHERE clauses to view data as if the data were coming from a single table. This is how you create a view table on a virtual model: | [
"CREATE VIEW CustomerOrders (name varchar(50), saledate date, amount decimal) OPTIONS (CARDINALITY 100, ANNOTATION 'Example') AS SELECT concat(c.firstname, c.lastname) as name, o.saledate as saledate, o.amount as amount FROM Customer C JOIN Order o ON c.id = o.customerid;"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/view |
Metadata APIs | Metadata APIs OpenShift Container Platform 4.13 Reference guide for metadata APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/metadata_apis/index |
Chapter 10. Installing a cluster on AWS into a Secret or Top Secret Region | Chapter 10. Installing a cluster on AWS into a Secret or Top Secret Region In OpenShift Container Platform version 4.15, you can install a cluster on Amazon Web Services (AWS) into the following secret regions: Secret Commercial Cloud Services (SC2S) Commercial Cloud Services (C2S) To configure a cluster in either region, you change parameters in the install-config.yaml file before you install the cluster. 10.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multifactor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 10.2. AWS secret regions The following AWS secret partitions are supported: us-isob-east-1 (SC2S) us-iso-east-1 (C2S) Note The maximum supported MTU in the AWS SC2S and C2S Regions is not the same as in the AWS commercial regions. For more information about configuring MTU during installation, see the Cluster Network Operator configuration object section in Installing a cluster on AWS with network customizations. 10.3. Installation requirements Red Hat does not publish a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for the AWS Secret and Top Secret Regions. Before you can install the cluster, you must: Upload a custom RHCOS AMI. Manually create the installation configuration file ( install-config.yaml ). Specify the AWS region, and the accompanying custom AMI, in the installation configuration file. You cannot use the OpenShift Container Platform installation program to create the installation configuration file. The installer does not list an AWS region without native support for an RHCOS AMI. Important You must also define a custom CA certificate in the additionalTrustBundle field of the install-config.yaml file because the AWS API requires a custom CA trust bundle. To allow the installation program to access the AWS API, the CA certificates must also be defined on the machine that runs the installation program. You must add the CA bundle to the trust store on the machine, use the AWS_CA_BUNDLE environment variable, or define the CA bundle in the ca_bundle field of the AWS config file. 10.4. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. Note Public zones are not supported in Route 53 in an AWS Top Secret Region. Therefore, clusters must be private if they are deployed to an AWS Top Secret Region. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster.
This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 10.4.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 10.4.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 10.5. About using a custom VPC In OpenShift Container Platform 4.15, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 10.5.1. 
Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. A cluster in an SC2S or C2S Region is unable to reach the public IP addresses for the EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: SC2S elasticloadbalancing.<aws_region>.sc2s.sgov.gov ec2.<aws_region>.sc2s.sgov.gov s3.<aws_region>.sc2s.sgov.gov C2S elasticloadbalancing.<aws_region>.c2s.ic.gov ec2.<aws_region>.c2s.ic.gov s3.<aws_region>.c2s.ic.gov With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. 
With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: SC2S elasticloadbalancing.<aws_region>.sc2s.sgov.gov ec2.<aws_region>.sc2s.sgov.gov s3.<aws_region>.sc2s.sgov.gov C2S elasticloadbalancing.<aws_region>.c2s.ic.gov ec2.<aws_region>.c2s.ic.gov s3.<aws_region>.c2s.ic.gov When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 10.5.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 10.5.3. 
Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 10.5.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 10.5.5. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 10.6. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 10.7. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. 
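The prerequisites that follow assume the RHCOS VMDK file is already staged in Amazon S3. As a hedged sketch of one way to obtain and stage it from a connected workstation, the installer can print the RHCOS stream metadata, which includes the VMDK download location; the jq path and the bucket name are illustrative assumptions and are not part of the official procedure, so verify them against your installer version:

```bash
# Locate the AWS VMDK artifact in the RHCOS stream metadata printed by the installer.
# The jq path follows the CoreOS stream-metadata schema (assumption: verify for your release).
VMDK_URL=$(./openshift-install coreos print-stream-json | \
  jq -r '.architectures.x86_64.artifacts.aws.formats."vmdk.gz".disk.location')

# Download and decompress the image, then stage it in an S3 bucket (bucket name is illustrative).
curl -LO "${VMDK_URL}"
gunzip "$(basename "${VMDK_URL}")"
aws s3 cp rhcos-*-aws.x86_64.vmdk s3://my-rhcos-staging-bucket/
```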
Prerequisites You configured an AWS account. You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 1 1 The RHCOS VMDK version, like 4.15.0 . Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 10.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. 
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 10.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer.
Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 10.10. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have uploaded a custom RHCOS AMI. You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: $ mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for AWS 10.10.1. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 10.1. Machine types based on 64-bit x86 architecture for secret regions c4.* c5.* i3.* m4.* m5.* r4.* r5.* t3.* 10.10.2.
Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-iso-east-1a - us-iso-east-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-iso-east-1a - us-iso-east-1b replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-iso-east-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{"auths": ...}' 24 additionalTrustBundle: | 25 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 12 14 17 24 Required. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. 
Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 18 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 19 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 20 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 25 The custom CA certificate. This is required when deploying to the SC2S or C2S Regions because the AWS API requires a custom CA trust bundle. 10.10.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 10.10.4. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . 
The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 10.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. 
To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 10.12. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 10.12.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... 
spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 10.12.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 10.12.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 10.2. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 10.3. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. 
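As an illustration of the permissions prerequisite above, the following hedged sketch attaches a subset of the listed actions to an IAM user as an inline policy. The user name, policy name, and the trimmed action list are assumptions for the example only; grant the complete set of permissions listed above for real use:

```bash
# Write an example policy document covering part of the permissions listed above (not the full set).
cat <<'EOF' > ccoctl-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateOpenIDConnectProvider",
        "iam:CreateRole",
        "iam:PutRolePolicy",
        "iam:TagRole",
        "s3:CreateBucket",
        "s3:PutObject",
        "s3:PutBucketPolicy"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Attach the document as an inline policy to the user that will run ccoctl (user name is illustrative).
aws iam put-user-policy \
  --user-name ccoctl-user \
  --policy-name ccoctl-permissions \
  --policy-document file://ccoctl-policy.json
```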
Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 10.12.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 10.12.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 10.12.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". 
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 10.12.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 10.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 10.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 10.15. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console 10.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 10.17. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . | [
"export AWS_PROFILE=<aws_profile> 1",
"export AWS_DEFAULT_REGION=<aws_region> 1",
"export RHCOS_VERSION=<version> 1",
"export VMIMPORT_BUCKET_NAME=<s3_bucket_name>",
"cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF",
"aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2",
"watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}",
"{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }",
"aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-iso-east-1a - us-iso-east-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-iso-east-1a - us-iso-east-1b replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-iso-east-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24 additionalTrustBundle: | 25 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_aws/installing-aws-secret-region |
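A brief, optional verification sketch for the installation row above. These are standard oc commands with no cluster-specific assumptions; run them after exporting the kubeconfig to confirm the cluster is healthy before handing it over to users:

oc get nodes
oc get clusteroperators
oc get clusterversion

All cluster Operators should report Available as True and Degraded as False, and the cluster version should show the expected release, before you proceed with the post-installation tasks listed in the row above.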
4.7. Additional Resources | 4.7. Additional Resources This section includes various resources that can be used to learn more about resource monitoring and the Red Hat Enterprise Linux-specific subject matter discussed in this chapter. 4.7.1. Installed Documentation The following resources are installed in the course of a typical Red Hat Enterprise Linux installation and can help you learn more about the subject matter discussed in this chapter. free(1) man page -- Learn how to display free and used memory statistics. vmstat(8) man page -- Learn how to display a concise overview of process, memory, swap, I/O, system, and CPU utilization. sar(1) man page -- Learn how to produce system resource utilization reports. sa2(8) man page -- Learn how to produce daily system resource utilization report files. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-memory-addres |
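As an illustrative sketch only (the flags shown are common invocations on Red Hat Enterprise Linux; consult the man pages listed above for the full option set), the tools covered by these resources can be exercised as follows:

free -m # display free and used memory in megabytes
vmstat 5 3 # three samples of process, memory, swap, I/O, system, and CPU activity at five-second intervals
sar -u 1 3 # three one-second samples of CPU utilization from the system activity reporter

Comparing the output of these commands over time is usually more informative than a single snapshot.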
Chapter 6. Migrating Data Grid clusters on Red Hat OpenShift | Chapter 6. Migrating Data Grid clusters on Red Hat OpenShift Review migration details for Data Grid clusters running on Red Hat OpenShift. 6.1. Data Grid on OpenShift Data Grid 8 introduces Data Grid Operator that provides operational intelligence and reduces management complexity for deploying Data Grid on OpenShift. With Data Grid 8, Data Grid Operator handles most configuration for Data Grid clusters, including authentication, client keystores, external network access, and logging. Data Grid 8.3 introduces a Helm chart for deploying Data Grid clusters on OpenShift. The Data Grid chart provides an alternative for scenarios where it is not possible to deploy clusters that the Data Grid Operator manages, or where you require manual configuration, deployment, and management of Data Grid clusters. Creating Data Grid Services Data Grid 7.3 introduced the Cache service and Data Grid service for creating Data Grid clusters on OpenShift. To create these services in Data Grid 7.3, you import the service templates, if necessary, and then use template parameters and environment variables to configure the services. Creating Cache service nodes in 7.3 Creating Data Grid service nodes in 7.3 Creating services in Data Grid 8 Create a Data Grid Operator subscription. Create an Infinispan Custom Resource (CR) to instantiate and configure Data Grid clusters. apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: example-infinispan spec: replicas: 2 service: type: Cache 1 1 The spec.service.type field specifies whether you create Cache service or Data Grid service nodes. 6.1.1. Container storage Data Grid 7.3 services use storage volumes mounted at /opt/datagrid/standalone/data . Data Grid 8 services use persistent volume claims mounted at /opt/infinispan/server/data . 6.1.2. Data Grid CLI Data Grid 7.3 let you access the CLI through remote shells only. Changes that you made via the Data Grid 7.3 CLI were bound to the pod and did not survive restarts. With Data Grid 8 you can use the CLI as a fully functional mechanism for performing administrative operations with clusters on OpenShift or manipulating data. 6.1.3. Data Grid console Data Grid 7.3 did not support the console on OpenShift. With Data Grid 8 you can use the console to monitor clusters running on OpenShift, perform administrative operations, and create caches remotely. 6.1.4. Customizing Data Grid Data Grid 7.3 let you use the Source-to-Image (S2I) process and ConfigMap API to customize Data Grid server images running on OpenShift. In Data Grid 8, Red Hat does not support customization of any Data Grid images from the Red Hat Container Registry. Data Grid Operator handles the deployment and management of Data Grid 8 clusters on OpenShift. As a result it is not possible to use custom: Discovery protocols Encryption mechanisms (SYM_ENCRYPT or ASYM_ENCRYPT) Persistent datasources In Data Grid 8.0 and 8.1, Data Grid Operator does not allow you to deploy custom code such as JAR files or other artefacts. In Data Grid 8.2, you can use a persistent volume claim (PVC) to make custom code available to Data Grid clusters. 6.1.5. Deployment configuration templates The deployment configuration templates, and environment variables, that were available in Data Grid 7.3 are removed in Data Grid 8. 6.2. Data Grid 8.2 on OpenShift This topic describes details for migrating from Data Grid 8.1 to 8.2 with Data Grid Operator.
Prometheus ServiceMonitor You no longer need to create a ServiceMonitor for Prometheus to scrape Data Grid metrics. Enable monitoring for user-defined projects on OpenShift Container Platform; Data Grid Operator automatically detects when the Prometheus Operator is installed and then creates a ServiceMonitor . 6.3. Data Grid 8.3 on OpenShift There are no migration requirements for Data Grid 8.3 deployments with Data Grid Operator. 6.4. Data Grid 8.4 on OpenShift There are no migration requirements for Data Grid 8.4 deployments with Data Grid Operator or Data Grid Helm chart. | [
"oc new-app cache-service -p APPLICATION_USER=USD{USERNAME} -p APPLICATION_PASSWORD=USD{PASSWORD} -p NUMBER_OF_INSTANCES=3 -p REPLICATION_FACTOR=2",
"oc new-app datagrid-service -p APPLICATION_USER=USD{USERNAME} -p APPLICATION_PASSWORD=USD{PASSWORD} -p NUMBER_OF_INSTANCES=3 -e AB_PROMETHEUS_ENABLE=true",
"apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: example-infinispan spec: replicas: 2 service: type: Cache 1"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/migrating_to_data_grid_8/openshift-migration |
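To complement the Cache service example in the migration row above, the following is a minimal sketch of the equivalent Infinispan CR for Data Grid service nodes. The metadata name and replica count are placeholders carried over from that example, and the only functional change is the spec.service.type value, which selects the Data Grid service instead of the Cache service:

apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
spec:
  replicas: 2
  service:
    type: DataGrid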
Chapter 19. Integrating RHACS using short-lived tokens | Chapter 19. Integrating RHACS using short-lived tokens With Red Hat Advanced Cluster Security for Kubernetes (RHACS) you can authenticate against selected cloud provider APIs using short-lived tokens. RHACS supports the following cloud provider integrations: Amazon Web Services (AWS) using the Secure Token Service (STS) Google Cloud Platform (GCP) using workload identity federation Microsoft Azure using Microsoft Entra ID with managed identities RHACS supports short-lived token integrations only when you install RHACS on the following platforms: Elastic Kubernetes Service (EKS) on AWS Google Kubernetes Engine (GKE) on GCP Microsoft Azure Kubernetes Service (AKS) OpenShift Container Platform To activate short-lived authentication, you must establish trust between your Kubernetes or OpenShift Container Platform cluster and your cloud provider. For EKS, GKE and AKS clusters, use the cloud provider metadata service. For OpenShift Container Platform clusters, you need a publicly available OpenID Connect (OIDC) provider bucket containing the OpenShift Container Platform service account signer key. Note You must establish trust with your cloud provider for every Central cluster that uses the short-lived token integration. However, if you use delegated scanning in combination with short-lived token image integrations, you must also establish trust for the Sensor cluster. 19.1. Configuring AWS Secure Token Service RHACS integrations can authenticate against Amazon Web Services using the Secure Token Service . You must configure AssumeRole with RHACS before enabling the Use container IAM role option in integrations. Important Verify that the AWS role associated with the RHACS pod must have the IAM permissions required by the integration. For example, to set up a container role for integrating with the Elastic Container Registry, enable full read access to the registry. For more information about AWS IAM roles, see IAM roles . 19.1.1. Configuring Elastic Kubernetes Service (EKS) When running Red Hat Advanced Cluster Security for Kubernetes (RHACS) on EKS, you can configure short-lived tokens through the Amazon Secure Token Service. Procedure Run the following command to enable the IAM OpenID Connect (OIDC) provider for your EKS cluster: USD eksctl utils associate-iam-oidc-provider --cluster <cluster_name> --approve Create an IAM role for your EKS cluster. Edit the permission policy of the role and grant the permissions required by the integration. For example: { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "ecr:BatchCheckLayerAvailability", "ecr:BatchGetImage", "ecr:DescribeImages", "ecr:DescribeRepositories", "ecr:GetAuthorizationToken", "ecr:GetDownloadUrlForLayer", "ecr:ListImages" ], "Resource": "arn:aws:iam::<ecr_registry>:role/<role_name>" } ] } Update the trust relationship for the role that you want to assume: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::<ecr-registry>:role/<role_name>" 1 ] }, "Action": "sts:AssumeRole" } ] } 1 The <role_name> should match with the new role that you have created in earlier steps. Enter the following command to associate the newly created role with a service account: USD oc -n stackrox annotate sa central eks.amazonaws.com/role-arn=arn:aws:iam::67890:role/<role_name> 1 1 If you use Kubernetes, enter kubectl instead of oc . 
Enter the following command to restart the Central pod and apply the changes: USD oc -n stackrox delete pod -l "app in (central,sensor)" 1 1 If you use Kubernetes, enter kubectl instead of oc . 19.1.2. Configuring OpenShift Container Platform When running Red Hat Advanced Cluster Security for Kubernetes (RHACS) on OpenShift Container Platform, you can configure short-lived tokens through the Amazon Secure Token Service. Prerequisites You must have a public OpenID Connect (OIDC) configuration bucket with the OpenShift Container Platform service account signer key. To get the OIDC configuration for the OpenShift Container Platform cluster, Red Hat recommends using the instructions at Cloud Credential Operator in manual mode for short-term credentials . You must have access to AWS IAM and the permissions to create and change roles. Procedure Follow the instructions at Creating OpenID Connect (OIDC) identity providers to create web identity of the OpenShift Container Platform cluster. Use openshift as the value for Audience . Create an IAM role for the web identity of the OpenShift Container Platform cluster. Edit the permission policy of the role and grant the permissions required by the integration. For example: { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "ecr:BatchCheckLayerAvailability", "ecr:BatchGetImage", "ecr:DescribeImages", "ecr:DescribeRepositories", "ecr:GetAuthorizationToken", "ecr:GetDownloadUrlForLayer", "ecr:ListImages" ], "Resource": "arn:aws:iam::<ecr_registry>:role/<role_name>" } ] } Update the trust relationship for the role that you want to assume: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "<oidc_provider_arn>" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "<oidc_provider_name>:aud": "openshift" } } } ] } Set the following RHACS environment variables on the Central or Sensor deployment: 19.2. Configuring Google workload identity federation RHACS integrations can authenticate against the Google Cloud Platform by using workload identities . Select the Use workload identity option upon creation to enable workload identity authentication in a Google Cloud integration. Important The Google service account associated with the RHACS pod through the workload identity must have the IAM permissions required by the integration. For example, to set up a workload identity for integrating with Google Artifact Registry, connect a service account with the roles/artifactregistry.reader role. For more information about Google IAM roles see Configure roles and permissions . 19.2.1. Configuring Google Kubernetes Engine (GKE) When running Red Hat Advanced Cluster Security for Kubernetes (RHACS) on GKE, you can configure short-lived tokens through Google workload identities. Prerequisites You must have access to the Google Cloud project containing the cluster and integration resources. Procedure Follow the instructions in the Google Cloud Platform documentation to Use workload identity federation for GKE . Annotate the RHACS service account by running the following command: USD oc annotate serviceaccount \ 1 central \ 2 --namespace stackrox \ iam.gke.io/gcp-service-account=<GSA_NAME>@<GSA_PROJECT>.iam.gserviceaccount.com 1 If you use Kubernetes, enter kubectl instead of oc . 2 When setting up delegated scanning, use sensor instead of central . 19.2.2. 
Configuring OpenShift Container Platform You can configure short-lived tokens through Google workload identities when running Red Hat Advanced Cluster Security for Kubernetes (RHACS) on OpenShift Container Platform. Prerequisites You must have a public OIDC configuration bucket with the OpenShift Container Platform service account signer key. The recommended way to obtain the OIDC configuration for the OpenShift Container Platform cluster is to use the Cloud Credential Operator in manual mode for short-term credentials instructions. Access to a Google Cloud project with the roles/iam.workloadIdentityPoolAdmin role. Procedure Follow the instructions at Manage workload identity pools to create a workload identity pool. For example: USD gcloud iam workload-identity-pools create rhacs-pool \ --location="global" \ --display-name="RHACS workload pool" Follow the instructions at Manage workload identity pool providers to create a workload identity pool provider. For example: USD gcloud iam workload-identity-pools providers create-oidc rhacs-provider \ --location="global" \ --workload-identity-pool="rhacs-pool" \ --display-name="RHACS provider" \ --attribute-mapping="google.subject=assertion.sub" \ --issuer-uri="https://<oidc_configuration_url>" \ --allowed-audiences=openshift Connect a Google service account to the workload identity pool. For example: USD gcloud iam service-accounts add-iam-policy-binding <GSA_NAME>@<GSA_PROJECT>.iam.gserviceaccount.com \ --role roles/iam.workloadIdentityUser \ --member="principal://iam.googleapis.com/projects/<GSA_PROJECT_NUMBER>/locations/global/workloadIdentityPools/rhacs-provider/subject/system:serviceaccount:stackrox:central" 1 1 For delegated scanning, set the subject to system:serviceaccount:stackrox:sensor . Create a service account JSON containing the Security token service (STS) configuration. For example: { "type": "external_account", "audience": "//iam.googleapis.com/projects/<GSA_PROJECT_ID>/locations/global/workloadIdentityPools/rhacs-pool/providers/rhacs-provider", "subject_token_type": "urn:ietf:params:oauth:token-type:jwt", "token_url": "https://sts.googleapis.com/v1/token", "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<GSA_NAME>@<GSA_PROJECT>.iam.gserviceaccount.com:generateAccessToken", "credential_source": { "file": "/var/run/secrets/openshift/serviceaccount/token", "format": { "type": "text" } } } Use the service account JSON as a secret to the RHACS namespace: apiVersion: v1 kind: Secret metadata: name: gcp-cloud-credentials namespace: stackrox data: credentials: <base64_encoded_json> 19.3. Configuring Microsoft Entra ID federation RHACS integrations can authenticate to Microsoft Azure by using managed or workload identities. Select the Use workload identity checkbox during the creation of a new Microsoft Azure Container Registry (ACR) integration, if you want to enable authentication by using managed or workload identities in a Microsoft Azure integration. For more information about Azure managed identities, see What are managed identities for Azure resources? (Microsoft Azure documentation). For more information about Azure workload identities, see Workload identity federation (Microsoft Azure documentation). Important The identity associated with the RHACS pod through the workload identity must have the IAM permissions for the integration. 
For example, to set up a workload identity for integrating with Microsoft ACR, assign the Reader role over a scope that includes the registry. For more information about Microsoft Azure IAM roles, see Azure RBAC documentation (Microsoft Azure documentation). 19.3.1. Configuring Microsoft Azure Kubernetes Service By running Red Hat Advanced Cluster Security for Kubernetes (RHACS) on Microsoft Azure Kubernetes Service (AKS), you can configure short-lived tokens by using Microsoft Entra ID managed identities. Prerequisites You have access to the cluster and integration resources within Microsoft Azure. Procedure Create a trust relationship between the external IdP and a user-assigned managed identity or application in Microsoft Entra ID. For more information, see Workload identity federation (Microsoft Azure documentation). Annotate the RHACS service account by running the following command: USD oc annotate serviceaccount \ 1 central \ 2 --namespace stackrox \ azure.workload.identity/client-id=<CLIENT_ID> 3 1 If you use Kubernetes, enter kubectl instead of oc . 2 When setting up the delegated scanning, use sensor instead of central . 3 Enter the client ID of the associated identity. Example output serviceaccount/central annotated 19.3.2. Configuring OpenShift Container Platform By running Red Hat Advanced Cluster Security for Kubernetes (RHACS) on OpenShift Container Platform, you can configure short-lived tokens by using Microsoft Entra ID managed identities. Prerequisites You have a public OpenID Connect (OIDC) configuration bucket with the OpenShift Container Platform service account signer key. For more information, see "Manual mode with short-term credentials for components" in OpenShift Container Platform documentation. You have a Microsoft Entra ID user-assigned managed identity. You have access to a Microsoft Azure subscription with the permission to assign role assignments. Procedure To add the federated identity credentials to a user-assigned managed identity, run the following command, for example: USD az identity federated-credential create \ --name "USD{FEDERATED_CREDENTIAL_NAME}" \ --identity-name "USD{MANAGED_IDENTITY_NAME}" \ 1 --resource-group "USD{RESOURCE_GROUP}" \ --issuer "USD{OIDC_ISSUER_URL}" \ 2 --subject system:serviceaccount:stackrox:central \ 3 --audience openshift 1 The managed identity must have all the permissions for federation. 2 The issuer must match the service account token issuer of the OpenShift Container Platform cluster. 3 For delegated scanning, set the subject to system:serviceaccount:stackrox:sensor . For more information about how to configure short-lived tokens by using Microsoft Entra ID managed identities, see Configure a user-assigned managed identity to trust an external identity provider (Microsoft Azure documentation). | [
"eksctl utils associate-iam-oidc-provider --cluster <cluster_name> --approve",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"ecr:BatchCheckLayerAvailability\", \"ecr:BatchGetImage\", \"ecr:DescribeImages\", \"ecr:DescribeRepositories\", \"ecr:GetAuthorizationToken\", \"ecr:GetDownloadUrlForLayer\", \"ecr:ListImages\" ], \"Resource\": \"arn:aws:iam::<ecr_registry>:role/<role_name>\" } ] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::<ecr-registry>:role/<role_name>\" 1 ] }, \"Action\": \"sts:AssumeRole\" } ] }",
"oc -n stackrox annotate sa central eks.amazonaws.com/role-arn=arn:aws:iam::67890:role/<role_name> 1",
"oc -n stackrox delete pod -l \"app in (central,sensor)\" 1",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"ecr:BatchCheckLayerAvailability\", \"ecr:BatchGetImage\", \"ecr:DescribeImages\", \"ecr:DescribeRepositories\", \"ecr:GetAuthorizationToken\", \"ecr:GetDownloadUrlForLayer\", \"ecr:ListImages\" ], \"Resource\": \"arn:aws:iam::<ecr_registry>:role/<role_name>\" } ] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"<oidc_provider_arn>\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"<oidc_provider_name>:aud\": \"openshift\" } } } ] }",
"AWS_ROLE_ARN=<role_arn> AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/openshift/serviceaccount/token",
"oc annotate serviceaccount \\ 1 central \\ 2 --namespace stackrox iam.gke.io/gcp-service-account=<GSA_NAME>@<GSA_PROJECT>.iam.gserviceaccount.com",
"gcloud iam workload-identity-pools create rhacs-pool --location=\"global\" --display-name=\"RHACS workload pool\"",
"gcloud iam workload-identity-pools providers create-oidc rhacs-provider --location=\"global\" --workload-identity-pool=\"rhacs-pool\" --display-name=\"RHACS provider\" --attribute-mapping=\"google.subject=assertion.sub\" --issuer-uri=\"https://<oidc_configuration_url>\" --allowed-audiences=openshift",
"gcloud iam service-accounts add-iam-policy-binding <GSA_NAME>@<GSA_PROJECT>.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member=\"principal://iam.googleapis.com/projects/<GSA_PROJECT_NUMBER>/locations/global/workloadIdentityPools/rhacs-provider/subject/system:serviceaccount:stackrox:central\" 1",
"{ \"type\": \"external_account\", \"audience\": \"//iam.googleapis.com/projects/<GSA_PROJECT_ID>/locations/global/workloadIdentityPools/rhacs-pool/providers/rhacs-provider\", \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<GSA_NAME>@<GSA_PROJECT>.iam.gserviceaccount.com:generateAccessToken\", \"credential_source\": { \"file\": \"/var/run/secrets/openshift/serviceaccount/token\", \"format\": { \"type\": \"text\" } } }",
"apiVersion: v1 kind: Secret metadata: name: gcp-cloud-credentials namespace: stackrox data: credentials: <base64_encoded_json>",
"oc annotate serviceaccount \\ 1 central \\ 2 --namespace stackrox azure.workload.identity/client-id=<CLIENT_ID> 3",
"serviceaccount/central annotated",
"az identity federated-credential create --name \"USD{FEDERATED_CREDENTIAL_NAME}\" --identity-name \"USD{MANAGED_IDENTITY_NAME}\" \\ 1 --resource-group \"USD{RESOURCE_GROUP}\" --issuer \"USD{OIDC_ISSUER_URL}\" \\ 2 --subject system:serviceaccount:stackrox:central \\ 3 --audience openshift"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/integrating/integrate-using-short-lived-tokens |
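A short verification sketch for the service account annotations described in this chapter. This is a generic oc command (use kubectl on plain Kubernetes clusters, and substitute sensor for central when you configure delegated scanning) that shows whether the expected annotation was applied:

oc -n stackrox get serviceaccount central -o yaml

In the output, check metadata.annotations for the eks.amazonaws.com/role-arn, iam.gke.io/gcp-service-account, or azure.workload.identity/client-id key, depending on which cloud provider integration you configured, and restart the affected pods after changing annotations so that the new identity is picked up.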
Chapter 10. Scheduling resources | Chapter 10. Scheduling resources Taints and tolerations allow the node to control which pods should (or should not) be scheduled on them. A node selector specifies a map of key/value pairs that are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the same key/value node selector as the label on the node. 10.1. Network Observability deployment in specific nodes You can configure the FlowCollector to control the deployment of Network Observability components in specific nodes. The spec.agent.ebpf.advanced.scheduling , spec.processor.advanced.scheduling , and spec.consolePlugin.advanced.scheduling specifications have the following configurable settings: NodeSelector Tolerations Affinity PriorityClassName Sample FlowCollector resource for spec.<component>.advanced.scheduling apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: # ... advanced: scheduling: tolerations: - key: "<taint key>" operator: "Equal" value: "<taint value>" effect: "<taint effect>" nodeSelector: <key>: <value> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: name operator: In values: - app-worker-node priorityClassName: """ # ... Additional resources Understanding taints and tolerations Assign Pods to Nodes (Kubernetes documentation) Pod Priority and Preemption (Kubernetes documentation) | [
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: advanced: scheduling: tolerations: - key: \"<taint key>\" operator: \"Equal\" value: \"<taint value>\" effect: \"<taint effect>\" nodeSelector: <key>: <value> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: name operator: In values: - app-worker-node priorityClassName: \"\"\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/network_observability/network-observability-scheduling-resources |
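As a companion sketch to the FlowCollector sample above, the node-side setup uses standard OpenShift commands. The node name, label, and taint values below are placeholders rather than values required by Network Observability:

oc label node <node_name> name=app-worker-node
oc adm taint nodes <node_name> <taint_key>=<taint_value>:NoSchedule

The label satisfies the nodeAffinity term in the sample ( key: name, values: app-worker-node ), and the taint must be mirrored by the tolerations entry, with the same key, value, and effect, for the Network Observability pods to be scheduled onto that node.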
Chapter 1. Cross-site replication | Chapter 1. Cross-site replication This section explains Data Grid cross-site replication capabilities, including details about relay nodes, state transfer, and client connections for remote caches. 1.1. Cross-site replication Data Grid can back up data between clusters running in geographically dispersed data centers and across different cloud providers. Cross-site replication provides Data Grid with a global cluster view and: Guarantees service continuity in the event of outages or disasters. Presents client applications with a single point of access to data in globally distributed caches. Figure 1.1. Cross-site replication 1.2. Relay nodes Relay nodes are the nodes in Data Grid clusters that are responsible for sending and receiving requests from backup locations. If a node is not a relay node, it must forward backup requests to a local relay node. Only relay nodes can send requests to backup locations. For optimal performance, you should configure all nodes as relay nodes. This increases the speed of backup requests because each node in the cluster can backup to remote sites directly without having to forward backup requests to local relay nodes. Note Diagrams in this document illustrate Data Grid clusters with one relay node because this is the default for the JGroups RELAY2 protocol. Likewise, a single relay node is easier to illustrate because each relay node in a cluster communicates with each relay node in the remote cluster. Note JGroups configuration refers to relay nodes as "site master" nodes. Data Grid uses relay node instead because it is more descriptive and presents a more intuitive choice for our users. 1.3. Data Grid cache backups Data Grid caches include a backups configuration that let you name remote sites as backup locations. For example, the following diagram shows three caches, "customers", "eu-orders", and "us-orders": In LON , "customers" names NYC as a backup location. In NYC , "customers" names LON as a backup location. "eu-orders" and "us-orders" do not have backups and are local to the respective cluster. 1.4. Backup strategies Data Grid replicates data between clusters at the same time that writes to caches occur. For example, if a client writes "k1" to LON , Data Grid backs up "k1" to NYC at the same time. To back up data to a different cluster, Data Grid can use either a synchronous or asynchronous strategy. Synchronous strategy When Data Grid replicates data to backup locations, it writes to the cache on the local cluster and the cache on the remote cluster concurrently. With the synchronous strategy, Data Grid waits for both write operations to complete before returning. You can control how Data Grid handles writes to the cache on the local cluster if backup operations fail. Data Grid can do the following: Ignore the failed backup and silently continue the write to the local cluster. Log a warning message or throw an exception and continue the write to the local cluster. Handle failed backup operations with custom logic. Synchronous backups also support two-phase commits with caches that participate in optimistic transactions. The first phase of the backup acquires a lock. The second phase commits the modification. Important Two-phase commit with cross-site replication has a significant performance impact because it requires two round-trips across the network. Asynchronous strategy When Data Grid replicates data to backup locations, it does not wait until the operation completes before writing to the local cache. 
Asynchronous backup operations and writes to the local cache are independent of each other. If backup operations fail, write operations to the local cache continue and no exceptions occur. When this happens Data Grid also retries the write operation until the remote cluster disconnects from the cross-site view. Synchronous vs asynchronous backups Synchronous backups offer the strongest guarantee of data consistency across sites. If strategy=sync , when cache.put() calls return you know the value is up to date in the local cache and in the backup locations. The trade-off for this consistency is performance. Synchronous backups have much greater latency in comparison to asynchronous backups. Asynchronous backups, on the other hand, do not add latency to client requests so they have no performance impact. However, if strategy=async , when cache.put() calls return you cannot be sure of that the value in the backup location is the same as in the local cache. 1.5. Automatic offline parameters for backup locations Operations to replicate data across clusters are resource intensive, using excessive RAM and CPU. To avoid wasting resources Data Grid can take backup locations offline when they stop accepting requests after a specific period of time. Data Grid takes remote sites offline based on the number of failed sequential requests and the time interval since the first failure. Requests are failed when the target cluster does not have any nodes in the cross-site view (JGroups bridge) or when a timeout expires before the target cluster acknowledges the request. Backup timeouts Backup configurations include timeout values for operations to replicate data between clusters. If operations do not complete before the timeout expires, Data Grid records them as failures. In the following example, operations to replicate data to NYC are recorded as failures if they do not complete after 10 seconds: XML <distributed-cache> <backups> <backup site="NYC" strategy="ASYNC" timeout="10000" /> </backups> </distributed-cache> JSON { "distributed-cache": { "backups": { "NYC" : { "backup" : { "strategy" : "ASYNC", "timeout" : "10000" } } } } } YAML distributedCache: backups: NYC: backup: strategy: "ASYNC" timeout: "10000" Number of failures You can specify the number of consecutive failures that can occur before backup locations go offline. In the following example, if a cluster attempts to replicate data to NYC and five consecutive operations fail, NYC automatically goes offline: XML <distributed-cache> <backups> <backup site="NYC" strategy="ASYNC" timeout="10000"> <take-offline after-failures="5"/> </backup> </backups> </distributed-cache> JSON { "distributed-cache": { "backups": { "NYC" : { "backup" : { "strategy" : "ASYNC", "timeout" : "10000", "take-offline" : { "after-failures" : "5" } } } } } } YAML distributedCache: backups: NYC: backup: strategy: "ASYNC" timeout: "10000" takeOffline: afterFailures: "5" Time to wait You can also specify how long to wait before taking sites offline when backup operations fail. If a backup request succeeds before the wait time runs out, Data Grid does not take the site offline. One or two minutes is generally a suitable time to wait before automatically taking backup locations offline. If the wait period is too short then backup locations go offline too soon. You then need to bring clusters back online and perform state transfer operations to ensure data is in sync between the clusters. A negative or zero value for the number of failures is equivalent to a value of 1 . 
Data Grid uses only a minimum time to wait to take backup locations offline after a failure occurs, for example: <take-offline after-failures="-1" min-wait="10000"/> In the following example, if a cluster attempts to replicate data to NYC and there are more than five consecutive failures and 15 seconds elapse after the first failed operation, NYC automatically goes offline: XML <distributed-cache> <backups> <backup site="NYC" strategy="ASYNC" timeout="10000"> <take-offline after-failures="5" min-wait="15000"/> </backup> </backups> </distributed-cache> JSON { "distributed-cache": { "backups": { "NYC" : { "backup" : { "strategy" : "ASYNC", "timeout" : "10000", "take-offline" : { "after-failures" : "5", "min-wait" : "15000" } } } } } } YAML distributedCache: backups: NYC: backup: strategy: "ASYNC" timeout: "10000" takeOffline: afterFailures: "5" minWait: "15000" 1.6. State transfer State transfer is an administrative operation that synchronizes data between sites. For example, LON goes offline and NYC starts handling client requests. When you bring LON back online, the Data Grid cluster in LON does not have the same data as the cluster in NYC . To ensure the data is consistent between LON and NYC , you can push state from NYC to LON . State transfer is bidirectional. For example, you can push state from NYC to LON or from LON to NYC . Pushing state to offline sites brings them back online. State transfer overwrites only data that exists on both sites, the originating site and the receiving site. Data Grid does not delete data. For example, "k2" exists on LON and NYC . "k2" is removed from NYC while LON is offline. When you bring LON back online, "k2" still exists at that location. If you push state from NYC to LON , the transfer does not affect "k2" on LON . Tip To ensure contents of the cache are identical after state transfer, remove all data from the cache on the receiving site before pushing state. Use the clear() method or the clearcache command from the CLI. State transfer does not overwrite updates to data that occur after you initiate the push. For example, "k1,v1" exists on LON and NYC . LON goes offline so you push state transfer to LON from NYC , which brings LON back online. Before state transfer completes, a client puts "k1,v2" on LON . In this case the state transfer from NYC does not overwrite "k1,v2" because that modification happened after you initiated the push. Automatic state transfer By default, you must manually perform cross-site state transfer operations with the CLI or via JMX or REST. However, when using the asynchronous backup strategy, Data Grid can automatically perform cross-site state transfer operations. When a backup location comes back online, and the network connection is stable, Data Grid initiates bidirectional state transfer between backup locations. For example, Data Grid simultaneously transfers state from LON to NYC and NYC to LON . Note To avoid temporary network disconnects triggering state transfer operations, there are two conditions that backup locations must meet to go offline. The status of a backup location must be offline and it must not be included in the cross-site view with JGroups RELAY2. The automatic state transfer is also triggered when a cache starts. In the scenario where LON is starting up, after a cache starts, it sends a notification to NYC . Following this, NYC starts a unidirectional state transfer to LON . Additional resources org.infinispan.Cache.clear() Using the Data Grid Command Line Interface Data Grid REST API 1.7. 
Client connections across sites Clients can write to Data Grid clusters in either an Active/Passive or Active/Active configuration. Active/Passive The following diagram illustrates Active/Passive where Data Grid handles client requests from one site only: In the preceding image: Client connects to the Data Grid cluster at LON . Client writes "k1" to the cache. The relay node at LON , "n1", sends the request to replicate "k1" to the relay node at NYC , "nA". With Active/Passive, NYC provides data redundancy. If the Data Grid cluster at LON goes offline for any reason, clients can start sending requests to NYC . When you bring LON back online you can synchronize data with NYC and then switch clients back to LON . Active/Active The following diagram illustrates Active/Active where Data Grid handles client requests at two sites: In the preceding image: Client A connects to the Data Grid cluster at LON . Client A writes "k1" to the cache. Client B connects to the Data Grid cluster at NYC . Client B writes "k2" to the cache. Relay nodes at LON and NYC send requests so that "k1" is replicated to NYC and "k2" is replicated to LON . With Active/Active both NYC and LON replicate data to remote caches while handling client requests. If either NYC or LON go offline, clients can start sending requests to the online site. You can then bring offline sites back online, push state to synchronize data, and switch clients as required. Backup strategies and client connections Important An asynchronous backup strategy ( strategy=async ) is recommended with Active/Active configurations. If multiple clients attempt to write to the same entry concurrently, and the backup strategy is synchronous ( strategy=sync ), then deadlocks occur. However you can use the synchronous backup strategy with an Active/Passive configuration if both sites access different data sets, in which case there is no risk of deadlocks from concurrent writes. 1.7.1. Concurrent writes and conflicting entries Conflicting entries can occur with Active/Active site configurations if clients write to the same entries at the same time but at different sites. For example, client A writes to "k1" in LON at the same time that client B writes to "k1" in NYC . In this case, "k1" has a different value in LON than in NYC . After replication occurs, there is no guarantee which value for "k1" exists at which site. To ensure data consistency, Data Grid uses a vector clock algorithm to detect conflicting entries during backup operations, as in the following illustration: Vector clocks are timestamp metadata that increment with each write to an entry. In the preceding example, 0,0 represents the initial value for the vector clock on "k1". A client puts "k1=2" in LON and the vector clock is 1,0 , which Data Grid replicates to NYC . A client then puts "k1=3" in NYC and the vector clock updates to 1,1 , which Data Grid replicates to LON . However if a client puts "k1=5" in LON at the same time that a client puts "k1=8" in NYC , Data Grid detects a conflicting entry because the vector value for "k1" is not strictly greater or less between LON and NYC . When it finds conflicting entries, Data Grid uses the Java compareTo(String anotherString) method to compare site names. To determine which key takes priority, Data Grid selects the site name that is lexicographically less than the other. Keys from a site named AAA take priority over keys from a site named AAB and so on. 
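In practice this tie-break is plain lexicographic string comparison. As a minimal illustration only, using java.lang.String directly rather than any internal Data Grid API, you can confirm the ordering in jshell:

jshell> "LON".compareTo("NYC")
$1 ==> -2

Because the result is negative, "LON" sorts before "NYC" and its value wins the conflict.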
Following the same example, to resolve the conflict for "k1", Data Grid uses the value for "k1" that originates from LON . This results in "k1=5" in both LON and NYC after Data Grid resolves the conflict and replicates the value. Tip Prepend site names with numbers as a simple way to represent the order of priority for resolving conflicting entries; for example, 1LON and 2NYC . Backup strategies Data Grid performs conflict resolution with the asynchronous backup strategy ( strategy=async ) only. You should never use the synchronous backup strategy with an Active/Active configuration. In this configuration concurrent writes result in deadlocks and you lose data. However you can use the synchronous backup strategy with an Active/Active configuration if both sites access different data sets, in which case there is no risk of deadlocks from concurrent writes. Cross-site merge policies Data Grid provides an XSiteEntryMergePolicy SPI in addition to cross-site merge policies that configure Data Grid to do the following: Always remove conflicting entries. Apply write operations when write/remove conflicts occur. Remove entries when write/remove conflicts occur. Additional resources XSiteMergePolicy enum lists all merge polices that Data Grid provides XSiteEntryMergePolicy SPI java.lang.String#compareTo() 1.8. Expiration with cross-site replication Expiration removes cache entries based on time. Data Grid provides two ways to configure expiration for entries: Lifespan The lifespan attribute sets the maximum amount of time that entries can exist. When you set lifespan with cross-site replication, Data Grid clusters expire entries independently of remote sites. Maximum idle The max-idle attribute specifies how long entries can exist based on read or write operations in a given time period. When you set a max-idle with cross-site replication, Data Grid clusters send touch commands to coordinate idle timeout values with remote sites. Note Using maximum idle expiration in cross-site deployments can impact performance because the additional processing to keep max-idle values synchronized means some operations take longer to complete. | [
"<distributed-cache> <backups> <backup site=\"NYC\" strategy=\"ASYNC\" timeout=\"10000\" /> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"backups\": { \"NYC\" : { \"backup\" : { \"strategy\" : \"ASYNC\", \"timeout\" : \"10000\" } } } } }",
"distributedCache: backups: NYC: backup: strategy: \"ASYNC\" timeout: \"10000\"",
"<distributed-cache> <backups> <backup site=\"NYC\" strategy=\"ASYNC\" timeout=\"10000\"> <take-offline after-failures=\"5\"/> </backup> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"backups\": { \"NYC\" : { \"backup\" : { \"strategy\" : \"ASYNC\", \"timeout\" : \"10000\", \"take-offline\" : { \"after-failures\" : \"5\" } } } } } }",
"distributedCache: backups: NYC: backup: strategy: \"ASYNC\" timeout: \"10000\" takeOffline: afterFailures: \"5\"",
"<take-offline after-failures=\"-1\" min-wait=\"10000\"/>",
"<distributed-cache> <backups> <backup site=\"NYC\" strategy=\"ASYNC\" timeout=\"10000\"> <take-offline after-failures=\"5\" min-wait=\"15000\"/> </backup> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"backups\": { \"NYC\" : { \"backup\" : { \"strategy\" : \"ASYNC\", \"timeout\" : \"10000\", \"take-offline\" : { \"after-failures\" : \"5\", \"min-wait\" : \"15000\" } } } } } }",
"distributedCache: backups: NYC: backup: strategy: \"ASYNC\" timeout: \"10000\" takeOffline: afterFailures: \"5\" minWait: \"15000\"",
"LON NYC k1=(n/a) 0,0 0,0 k1=2 1,0 --> 1,0 k1=2 k1=3 1,1 <-- 1,1 k1=3 k1=5 2,1 1,2 k1=8 --> 2,1 (conflict) (conflict) 1,2 <--"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_cross-site_replication/cross-site-replication |
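As a rough sketch of the state transfer workflow described above, the following CLI session clears the cache at the receiving site and then pushes state from NYC to LON. The site subcommands (status, bring-online, push-site-state), the clearcache command, and the cache name mycache are assumed to match the Data Grid CLI for recent releases; confirm the exact syntax with the CLI help for your version.

# from a CLI session connected to a node in the LON cluster, optionally empty the cache first
clearcache mycache

# from a CLI session connected to a node in the NYC cluster
site status --cache=mycache                      # check whether LON is online or offline
site bring-online --cache=mycache --site=LON     # bring LON back online if necessary
site push-site-state --cache=mycache --site=LON  # push state from NYC to LON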
5.85. glib2 | 5.85.1. RHBA-2012:0794 - glib2 bug fix update Updated glib2 packages that fix one bug are now available for Red Hat Enterprise Linux 6. GLib is a low-level core library that forms the basis for projects such as GTK+ and GNOME. It provides data structure handling for C, portability wrappers, and interfaces for such runtime functionality as an event loop, threads, dynamic loading, and an object system. Bug Fix BZ# 782194 Prior to this update, the gtester-report script was not marked as executable in the glib2-devel package. As a consequence, the gtester-report script could not be run with its default permissions. This update changes the glib2-devel package definition so that this script is now executable. All users are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/glib2
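If updating immediately is not possible, the permission problem described in this erratum can be checked and worked around manually. The /usr/bin/gtester-report path below is an assumption about where glib2-devel installs the script; confirm it with rpm -ql first.

rpm -ql glib2-devel | grep gtester-report    # confirm the installed path of the script
ls -l /usr/bin/gtester-report                # check whether the executable bit is set
chmod +x /usr/bin/gtester-report             # interim workaround until the updated package is installed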
Chapter 2. Installing and Uninstalling an Identity Management Server | Chapter 2. Installing and Uninstalling an Identity Management Server An Identity Management (IdM) server is a domain controller: it defines and manages the IdM domain. To set up an IdM server, you must: Install the necessary packages Configure the machine using setup scripts Red Hat strongly recommends setting up multiple domain controllers within your domain for load balancing and redundancy. These additional servers are replicas of the initial master IdM server. This chapter describes installing the first, initial IdM server. For information on installing a replica from the initial server, see Chapter 4, Installing and Uninstalling Identity Management Replicas . 2.1. Prerequisites for Installing a Server 2.1.1. Minimal Hardware Requirements To run Identity Management (IdM), the server requires at a minimum the following hardware configuration: 1 (virtual) CPU core 2 GB RAM Even if you can install IdM with less RAM, certain operations, such as updating IdM, require at least 4 GB RAM. 10 GB hard disk Important Depending on the amount of data stored in the database, IdM requires more resources, especially more RAM. For details, see Section 2.1.2, "Hardware Recommendations" . The required hardware resources also depend on other factors, such as the production workload of the server or whether a trust with Active Directory is configured. 2.1.2. Hardware Recommendations RAM is the most important hardware feature to size properly. To determine how much RAM you require, consider these recommendations: For 10,000 users and 100 groups: at least 3 GB of RAM and 1 GB swap space For 100,000 users and 50,000 groups: at least 16 GB of RAM and 4 GB of swap space Note A basic user entry or a simple host entry with a certificate is approximately 5 - 10 KiB in size. For larger deployments, it is more effective to increase the RAM than to increase disk space because much of the data is stored in cache. To increase performance, you can tune the underlying Directory Server. For details, see the Red Hat Directory Server Performance Tuning Guide . 2.1.3. System Requirements Identity Management is supported on Red Hat Enterprise Linux 7. Install an IdM server on a clean system without any custom configuration for services such as DNS, Kerberos, or Directory Server. Important For performance and stability reasons, Red Hat recommends that you do not install other applications or services on IdM servers. For example, IdM servers can place a heavy load on the system, especially if the number of LDAP objects is high. Also, IdM is integrated into the system and, if third-party applications change configuration files that IdM depends on, IdM can break. The IdM server installation overwrites system files to set up the IdM domain. IdM backs up the original system files to /var/lib/ipa/sysrestore/ . Name Service Cache Daemon (NSCD) requirements Red Hat recommends disabling NSCD on Identity Management machines. Alternatively, if disabling NSCD is not possible, only enable NSCD for maps that SSSD does not cache. Both NSCD and the SSSD service perform caching, and problems can occur when systems use both services simultaneously. See the System-Level Authentication Guide for information on how to avoid conflicts between NSCD and SSSD. IPv6 must be enabled on the system The IdM server must have the IPv6 protocol enabled in the kernel. Note that IPv6 is enabled by default on Red Hat Enterprise Linux 7 systems.
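A quick way to confirm that the IPv6 protocol has not been disabled in the kernel before you start the installation is to query the corresponding sysctl; a value of 0 indicates that IPv6 is enabled:

sysctl net.ipv6.conf.all.disable_ipv6
# expected output: net.ipv6.conf.all.disable_ipv6 = 0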
If you disabled IPv6 before, re-enable it as described in How do I disable or enable the IPv6 protocol in Red Hat Enterprise Linux? in Red Hat Knowledgebase. Note IdM does not require the IPv6 protocol to be enabled in the kernel of the hosts you want to enroll as clients. For example, if your internal network only uses the IPv4 protocol, you can configure the System Security Services Daemon (SSSD) to only use IPv4 to communicate with the IdM server. You can do this by inserting the following line into the [domain/_NAME_] section of the /etc/sssd/sssd.conf file: For more information on the lookup_family_order , see the sssd.conf(5) man page. 2.1.4. Prerequisites for Installing a Server in a FIPS Environment In environments set up using Red Hat Enterprise Linux 7.4 and later: You can configure a new IdM server or replica on a system with the Federal Information Processing Standard (FIPS) mode enabled. The installation script automatically detects a system with FIPS enabled and configures IdM without the administrator's intervention. To enable FIPS in the operating system, see Enabling FIPS Mode in the Security Guide . Important You cannot: Enable FIPS mode on existing IdM servers previously installed with FIPS mode disabled. Install a replica in FIPS mode when using an existing IdM server with FIPS mode disabled. In environments set up using Red Hat Enterprise Linux 7.3 and earlier: IdM does not support the FIPS mode. Disable FIPS on your system before installing an IdM server or replica, and do not enable it after the installation. For further details about FIPS mode, see Federal Information Processing Standard (FIPS) in the Security Guide . 2.1.5. Host Name and DNS Configuration Warning Be extremely cautious and ensure that: you have a tested and functional DNS service available the service is properly configured This requirement applies to IdM servers with integrated DNS services as well as to IdM servers installed without DNS. DNS records are vital for nearly all IdM domain functions, including running LDAP directory services, Kerberos, and Active Directory integration. Note that the primary DNS domain and Kerberos realm cannot be changed after the installation. Do not use single-label domain names, for example .company : the IdM domain must be composed of one or more subdomains and a top level domain, for example example.com or company.example.com . The server host must have DNS properly configured regardless of whether the DNS server is integrated within IdM or hosted externally. Identity Management requires one separate DNS domain to be used for service records. To avoid conflicts on the DNS level, the primary IdM DNS domain , the DNS domain whose name is the lower-case version of the IdM Kerberos name, cannot be shared with any other system, such as other IdM or AD domains The primary IdM DNS domain must contain its own SRV records for standard IdM services. The required records are: the SRV record of both _kerberos._tcp. domain_name and _kerberos._udp. domain_name the SRV record of _ldap._tcp. domain_name the TXT record of _kerberos. domain_name When an enrolled client, via the ipa command-line tool, is looking for a service provided or mediated by IdM, it looks up the server specified by the xmlrpc_uri parameter in the /etc/ipa/default.conf file. If need be, it also looks up the IdM DNS domain name given in the domain parameter in the same file, and consults the _ldap._tcp. domain_name SRV record for that domain to identify the server it is looking for. 
If there is no domain given in the /etc/ipa/default.conf file, the client only contacts the server that is set in the xmlrpc_uri parameter of the file. Note that the host names of IdM clients and servers are not required to be part of the primary DNS domain. However, in trust environments with Active Directory (AD), the host names of IdM servers must be part of the IdM-owned domain, the domain associated with the IdM realm, and not part of the AD-owned domain, the domain associated with the trusted AD realm. From the perspective of the trust, this association is managed using Realm domains . For information on configuring users to access an IdM client using a host name from the Active Directory DNS domain, while the client itself is joined to IdM, see IdM clients in an Active Directory DNS Domain in the Windows Integration Guide . Verifying the Server Host Name The host name must be a fully qualified domain name, such as server.example.com . Important Do not use single-label domain names, for example .company: the IdM domain must be composed of one or more subdomains and a top level domain, for example example.com or company.example.com. The fully qualified domain name must meet the following conditions: It is a valid DNS name, which means only numbers, alphabetic characters, and hyphens (-) are allowed. Other characters, such as underscores (_), in the host name cause DNS failures. It is all lower-case. No capital letters are allowed. The fully qualified domain name must not resolve to the loopback address. It must resolve to the machine's public IP address, not to 127.0.0.1 . For other recommended naming practices, see the Recommended Naming Practices in the Red Hat Enterprise Linux Security Guide . To verify your machine's host name, use the hostname utility: The output of hostname must not be localhost or localhost6 . Verifying the Forward and Reverse DNS Configuration Obtain the IP address of the server. The ip addr show command displays both the IPv4 and IPv6 addresses: The IPv4 address is displayed on the line starting with inet . In the following example, the configured IPv4 address is 192.0.2.1 . The IPv6 address is displayed on the line starting with inet6 . Only IPv6 addresses with scope global are relevant for this procedure. In the following example, the returned IPv6 address is 2001:DB8::1111 . Verify the forward DNS configuration by using the dig utility and adding the host name. Run the dig +short server.example.com A command. The returned IPv4 address must match the IP address returned by ip addr show : Run the dig +short server.example.com AAAA command. If the command returns an address, it must match the IPv6 address returned by ip addr show : Note If no output is returned for the AAAA record, it does not indicate incorrect configuration; no output only means that no IPv6 address is configured in DNS for the server machine. If you do not intend to use the IPv6 protocol in your network, you can proceed with the installation in this situation. Verify the reverse DNS configuration (PTR records) by using the dig utility and adding the IP address. Run the dig +short -x IPv4 address command. The server host name must be displayed in the command output. For example: Use dig to query the IPv6 address as well if the dig +short -x server.example.com AAAA command in the step returned an IPv6 address. Again, the server host name must be displayed in the command output. 
For example: Note If dig +short server.example.com AAAA in the step did not display any IPv6 address, querying the AAAA record does not output anything. In this case, this is normal behavior and does not indicate incorrect configuration. If a different host name or no host name is displayed, even though dig +short server.example.com in the step returned an IP address, it indicates that the reverse DNS configuration is incorrect. Verifying the Standards-compliance of DNS Forwarders When configuring IdM with integrated DNS, it is recommended to use DNS Security Extensions (DNSSEC) records validation. By validating signed DNS records from other servers, you protect your IdM installation against spoofed addresses. However, DNSSEC validation is not a hard requirement for a successful IdM installation. IdM installer enables DNSSEC records validation by default. For successful DNSSEC validation, it is crucial to have forwarders on which DNSSEC has been properly configured. During installation, IdM checks global forwarders, and if a forwarder does not support DNSSEC, the DNSSEC validation will be disabled on the forwarder. To verify that all DNS forwarders you want to use with the IdM DNS server comply with the Extension Mechanisms for DNS (EDNS0) and DNSSEC standards: The expected output displayed by the command contains the following information: status: NOERROR flags: ra EDNS flags: do The RRSIG record must be present in the ANSWER section If any of these items is missing from the output, inspect the documentation of your DNS forwarder and verify that EDNS0 and DNSSEC are supported and enabled. In latest versions of the BIND server, the dnssec-enable yes; option must be set in the /etc/named.conf file. For example, the expected output can look like this: The /etc/hosts File Important Do not modify the /etc/hosts file manually. If /etc/hosts has been modified, make sure its contents conform to the following rules. The following is an example of a correctly configured /etc/hosts file. It properly lists the IPv4 and IPv6 localhost entries for the host, followed by the IdM server IP address and host name as the first entry. Note that the IdM server host name cannot be part of the localhost entry. 2.1.6. Port Requirements IdM uses a number of ports to communicate with its services. These ports must be open and available for IdM to work. They cannot be in use by another service or blocked by a firewall. For a list of the required ports, see the section called "List of Required Ports" . For a list of firewalld services that correspond to the required ports, see the section called "List of firewalld Services" . List of Required Ports Table 2.1. Identity Management Ports Service Ports Protocol HTTP/HTTPS 80, 443 TCP LDAP/LDAPS 389, 636 TCP Kerberos 88, 464 TCP and UDP DNS 53 TCP and UDP NTP 123 UDP Note Do not be concerned that IdM uses ports 80 and 389. Port 80 (HTTP) is used to provide Online Certificate Status Protocol (OCSP) responses and Certificate Revocation Lists (CRL). Both are digitally signed and therefore secured against man-in-the-middle attacks. Port 389 (LDAP) uses STARTTLS and GSSAPI for encryption. In addition, IdM can listen on port 8080 and in some installations also on ports 8443 and 749. However, these three ports are only used internally: even though IdM keeps them open, they are not required to be accessible from outside. It is recommended that you do not open ports 8080, 8443, and 749 and instead leave them blocked by a firewall. List of firewalld Services Table 2.2. 
firewalld Services Service name For details, see: freeipa-ldap /usr/lib/firewalld/services/freeipa-ldap.xml freeipa-ldaps /usr/lib/firewalld/services/freeipa-ldaps.xml dns /usr/lib/firewalld/services/dns.xml Opening the Required Ports Make sure the firewalld service is running. To find out if firewalld is currently running: To start firewalld and configure it to start automatically when the system boots: Open the required ports using the firewall-cmd utility. Choose one of the following options: Add the individual ports to the firewall by using the firewall-cmd --add-port command. For example, to open the ports in the default zone: Add the firewalld services to the firewall by using the firewall-cmd --add-service command. For example, to open the ports in the default zone: For details on using firewall-cmd to open ports on a system, see the Modifying Settings in Runtime and Permanent Configuration using CLI in the Security Guide or the firewall-cmd (1) man page. Reload the firewall-cmd configuration to ensure that the change takes place immediately: Note that reloading firewalld on a system in production can cause DNS connection time outs. See also Modifying Settings in Runtime and Permanent Configuration using CLI in the Security Guide . If required, to avoid the risk of time outs and to make the changes persistent on the running system, use the --runtime-to-permanent option of the firewall-cmd command, for example: Optional. To verify that the ports are available now, use the nc , telnet , or nmap utilities to connect to a port or run a port scan. Note Note that you also have to open network-based firewalls for both incoming and outgoing traffic. | [
"lookup_family_order = ipv4_only",
"hostname server.example.com",
"ip addr show 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:1a:4a:10:4e:33 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1 /24 brd 192.0.2.255 scope global dynamic eth0 valid_lft 106694sec preferred_lft 106694sec inet6 2001:DB8::1111 /32 scope global dynamic valid_lft 2591521sec preferred_lft 604321sec inet6 fe80::56ee:75ff:fe2b:def6/64 scope link valid_lft forever preferred_lft forever",
"dig +short server.example.com A 192.0.2.1",
"dig +short server.example.com AAAA 2001:DB8::1111",
"dig +short -x 192.0.2.1 server.example.com",
"dig +short -x 2001:DB8::1111 server.example.com",
"dig +dnssec @ IP_address_of_the_DNS_forwarder . SOA",
";; ->>HEADER<<- opcode: QUERY, status: NOERROR , id: 48655 ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags: do; udp: 4096 ;; ANSWER SECTION: . 31679 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2015100701 1800 900 604800 86400 . 31679 IN RRSIG SOA 8 0 86400 20151017170000 20151007160000 62530 . GNVz7SQs [...]",
"127.0.0.1 localhost.localdomain localhost ::1 localhost6.localdomain6 localhost6 192.0.2.1 server.example.com server 2001:DB8::1111 server.example.com server",
"systemctl status firewalld.service",
"systemctl start firewalld.service systemctl enable firewalld.service",
"firewall-cmd --permanent --add-port={80/tcp,443/tcp, list_of_ports }",
"firewall-cmd --permanent --add-service={freeipa-ldap, list_of_services }",
"firewall-cmd --reload",
"firewall-cmd --runtime-to-permanent --add-port={80/tcp,443/tcp,389/tcp,636/tcp,88/tcp,88/udp,464/tcp,464/udp,53/tcp,53/udp,123/udp}"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/installing-ipa |
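As a sketch of the optional port check mentioned above, the required ports can be scanned from another host with nmap; server.example.com is the example host name used throughout this chapter, and the UDP scan must run as root:

nmap -p 53,80,88,389,443,464,636 server.example.com    # required TCP ports
nmap -sU -p 53,88,123,464 server.example.com           # required UDP ports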
Chapter 21. Guest Virtual Machine Disk Access with Offline Tools | Chapter 21. Guest Virtual Machine Disk Access with Offline Tools 21.1. Introduction Red Hat Enterprise Linux 7 provides a number of libguestfs utilities that enable accessing, editing, and creating guest virtual machine disks or other disk images. There are multiple uses for these tools, including: Viewing or downloading files located on a guest virtual machine disk. Editing or uploading files on a guest virtual machine disk. Reading or writing guest virtual machine configuration. Preparing new disk images containing files, directories, file systems, partitions, logical volumes and other options. Rescuing and repairing guest virtual machines that fail to boot or those that need boot configuration changes. Monitoring disk usage of guest virtual machines. Auditing compliance of guest virtual machines, for example to organizational security standards. Deploying guest virtual machines by cloning and modifying templates. Reading CD and DVD ISO images and floppy disk images. Warning You must never use the utilities listed in this chapter to write to a guest virtual machine or disk image that is attached to a running virtual machine, not even to open such a disk image in write mode. Doing so will result in disk corruption of the guest virtual machine. The tools try to prevent you from doing this, but do not secure all cases. If there is any suspicion that a guest virtual machine might be running, Red Hat strongly recommends not using the utilities. For increased safety, certain utilities can be used in read-only mode (using the --ro option), which does not save the changes. Note The primary source for documentation for libguestfs and the related utilities are the Linux man pages. The API is documented in guestfs(3) , guestfish is documented in guestfish(1) , and the virtualization utilities are documented in their own man pages (such as virt-df(1) ). For troubleshooting information, see Section A.17, "libguestfs Troubleshooting" 21.1.1. Caution about Using Remote Connections Some virtualization commands in Red Hat Enterprise Linux 7 allow you to specify a remote libvirt connection. For example: However, libguestfs utilities in Red Hat Enterprise Linux 7 cannot access the disks of remote libvirt guests, and commands using remote URLs as shown above do not work as expected. Nevertheless, beginning with Red Hat Enterprise Linux 7, libguestfs can access remote disk sources over network block device (NBD). You can export a disk image from a remote machine using the qemu-nbd command, and access it using a nbd:// URL. You may need to open a port on your firewall (port 10809) as shown here: On the remote system: qemu-nbd -t disk.img On the local system: virt-df -a nbd://remote The following libguestfs commands are affected: guestfish guestmount virt-alignment-scan virt-cat virt-copy-in virt-copy-out virt-df virt-edit virt-filesystems virt-inspector virt-ls virt-rescue virt-sysprep virt-tar-in virt-tar-out virt-win-reg | [
"virt-df -c qemu:// remote/system -d Guest"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-guest_virtual_machine_disk_access_with_offline_tools |
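Putting the NBD workflow from this section together, the following sketch exports a disk image from a remote machine and inspects it read-only from the local one. The host name remote.example.com and the image name disk.img are placeholders, and the guestfish line assumes that your libguestfs version accepts nbd:// URLs with the -a option, as described in guestfish(1).

# on the remote system: open the NBD port and export the image
firewall-cmd --add-port=10809/tcp
qemu-nbd -t disk.img

# on the local system: inspect the exported image read-only
virt-df -a nbd://remote.example.com
guestfish --ro -a nbd://remote.example.com -i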
Preface | Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/pr01 |
5.5.2.3. Tracking of File Creation, Access, Modification Times | 5.5.2.3. Tracking of File Creation, Access, Modification Times Most file systems keep track of the time at which a file was created; some also track modification and access times. Over and above the convenience of being able to determine when a given file was created, accessed, or modified, these dates are vital for the proper operation of incremental backups. More information on how backups make use of these file system features can be found in Section 8.2, "Backups" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-storage-usable-fs-times |
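For example, the stat command displays the access, modification, and change times that incremental backups depend on, and find can select only the files that changed since the last backup run; the timestamp file used below is a placeholder:

stat /home/user/report.txt                        # shows the Access, Modify, and Change times
find /home -newer /var/backups/last-full.stamp    # files modified since the last backup run
touch /var/backups/last-full.stamp                # record the time of the current backup run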
32.2.3. Configuring kdump on the Command Line | 32.2.3. Configuring kdump on the Command Line Configuring the Memory Usage Memory reserved for the kdump kernel is always reserved during system boot, which means that the amount of memory is specified in the system's boot loader configuration. This section will explain how to change the amount of reserved memory on AMD64 and Intel 64 systems and IBM Power Systems servers using the GRUB boot loader, and on IBM System z using zipl . To configure the amount of memory to be reserved for the kdump kernel, edit the /boot/grub/grub.conf file and add crashkernel= <size> M or crashkernel=auto to the list of kernel options as shown in Example 32.1, "A sample /boot/grub/grub.conf file" . Note that the crashkernel=auto option only reserves the memory if the physical memory of the system is equal to or greater than: 2 GB on 32-bit and 64-bit x86 architectures; 2 GB on PowerPC if the page size is 4 KB, or 8 GB otherwise; 4 GB on IBM S/390 . Example 32.1. A sample /boot/grub/grub.conf file Important This section is available only if the system has enough memory. To learn about minimum memory requirements of the Red Hat Enterprise Linux 6 system, read the Required minimums section of the Red Hat Enterprise Linux Technology Capabilities and Limits comparison chart. When the kdump crash recovery is enabled, the minimum memory requirements increase by the amount of memory reserved for it. This value is determined by the user, and defaults to 128 MB plus 64 MB for each TB of physical memory (that is, a total of 192 MB for a system with 1 TB of physical memory). The memory can be attempted up to the maximum of 896 MB if required. This is recommended especially in large environments, for example in systems with a large number of Logical Unit Numbers (LUNs). Configuring the Target Type When a kernel crash is captured, the core dump can be either stored as a file in a local file system, written directly to a device, or sent over a network using the NFS (Network File System) or SSH (Secure Shell) protocol. Only one of these options can be set at the moment, and the default option is to store the vmcore file in the /var/crash/ directory of the local file system. To change this, as root , open the /etc/kdump.conf configuration file in a text editor and edit the options as described below. To change the local directory in which the core dump is to be saved, remove the hash sign ( " # " ) from the beginning of the #path /var/crash line, and replace the value with a desired directory path. Optionally, if you want to write the file to a different partition, follow the same procedure with the #ext4 /dev/sda3 line as well, and change both the file system type and the device (a device name, a file system label, and UUID are all supported) accordingly. For example: To write the dump directly to a device, remove the hash sign ( " # " ) from the beginning of the #raw /dev/sda5 line, and replace the value with a desired device name. For example: To store the dump to a remote machine using the NFS protocol, remove the hash sign ( " # " ) from the beginning of the #net my.server.com:/export/tmp line, and replace the value with a valid host name and directory path. For example: To store the dump to a remote machine using the SSH protocol, remove the hash sign ( " # " ) from the beginning of the #net [email protected] line, and replace the value with a valid user name and host name. 
For example: See Chapter 14, OpenSSH for information on how to configure an SSH server, and how to set up a key-based authentication. For a complete list of currently supported targets, see Table 32.1, "Supported kdump targets" . Note When using Direct-Access Storage Devices (DASDs) as the kdump target, the devices must be specified in the /etc/dasd.conf file with other DASDs, for example: Where 0.0.2298 and 0.0.2398 are the DASDs used as the kdump target. Similarly, when using FCP-attached Small Computer System Interface (SCSI) disks as the kdump target, the disks must be specified in the /etc/zfcp.conf file with other FCP-Attached SCSI disks, for example: Where 0.0.3d0c 0x500507630508c1ae 0x402424ab00000000 and 0.0.3d0c 0x500507630508c1ae 0x402424ac00000000 are the FCP-attached SCSI disks used as the kdump target. See the Adding DASDs and Adding FCP-Attached Logical Units (LUNs) chapters in the Installation Guide for Red Hat Enterprise Linux 6 for detailed information about configuring DASDs and FCP-attached SCSI disks. Important When transferring a core file to a remote target over SSH, the core file needs to be serialized for the transfer. This creates a vmcore.flat file in the /var/crash/ directory on the target system, which is unreadable by the crash utility. To convert vmcore.flat to a dump file that is readable by crash , run the following command as root on the target system: Configuring the Core Collector To reduce the size of the vmcore dump file, kdump allows you to specify an external application (that is, a core collector) to compress the data, and optionally leave out all irrelevant information. Currently, the only fully supported core collector is makedumpfile . To enable the core collector, as root , open the /etc/kdump.conf configuration file in a text editor, remove the hash sign ( " # " ) from the beginning of the #core_collector makedumpfile -c --message-level 1 -d 31 line, and edit the command-line options as described below. To enable the dump file compression, add the -c parameter. For example: To remove certain pages from the dump, add the -d value parameter, where value is a sum of values of pages you want to omit as described in Table 32.2, "Supported filtering levels" . For example, to remove both zero and free pages, use the following: See the manual page for makedumpfile for a complete list of available options. Table 32.2. Supported filtering levels Option Description 1 Zero pages 2 Cache pages 4 Cache private 8 User pages 16 Free pages Changing the Default Action With Red Hat Enterprise Linux 6.0, up to, and including version 6.2, the default action when kdump fails to create a core dump, the root file system is mounted and /sbin/init is run. From Red Hat Enterprise Linux 6.3 onwards, the default behavior is to reboot the machine. This change was necessary to ensure that kdump could operate reliably using less reserved memory. To allow the behavior, the mount_root_run_init option has been added to Table 32.3, "Supported actions" . To change the default behavior, as root , open the /etc/kdump.conf configuration file in a text editor, remove the hash sign ( " # " ) from the beginning of the #default shell line, and replace the value with a desired action as described in Table 32.3, "Supported actions" . Table 32.3. Supported actions Option Description reboot Reboot the system, losing the core in the process. halt Halt the system. poweroff Power off the system. shell Run the msh session from within the initramfs, allowing a user to record the core manually. 
mount_root_run_init Enable the default failback behavior from Red Hat Enterprise Linux 6.2 and earlier. For example: Enabling the Service To start the kdump daemon at boot time, type the following at a shell prompt as root : chkconfig kdump on This will enable the service for runlevels 2 , 3 , 4 , and 5 . Similarly, typing chkconfig kdump off will disable it for all runlevels. To start the service in the current session, use the following command as root : service kdump start For more information on runlevels and configuring services in general, see Chapter 12, Services and Daemons . | [
"grub.conf generated by anaconda # # Note that you do not have to rerun grub after making changes to this file # NOTICE: You have a /boot partition. This means that # all kernel and initrd paths are relative to /boot/, eg. # root (hd0,0) # kernel /vmlinuz-version ro root=/dev/sda3 # initrd /initrd #boot=/dev/sda default=0 timeout=5 splashimage=(hd0,0)/grub/splash.xpm.gz hiddenmenu title Red Hat Enterprise Linux Server (2.6.32-220.el6.x86_64) root (hd0,0) kernel /vmlinuz-2.6.32-220.el6.x86_64 ro root=/dev/sda3 crashkernel=128M initrd /initramfs-2.6.32-220.el6.x86_64.img",
"ext3 /dev/sda4 path /usr/local/cores",
"raw /dev/sdb1",
"net penguin.example.com:/export/cores",
"net [email protected]",
"0.0.2098 0.0.2198 0.0.2298 0.0.2398",
"0.0.3d0c 0x500507630508c1ae 0x402424aa00000000 0.0.3d0c 0x500507630508c1ae 0x402424ab00000000 0.0.3d0c 0x500507630508c1ae 0x402424ac00000000",
"~]# /usr/sbin/makedumpfile -R */tmp/vmcore-rearranged* < *vmcore.flat*",
"core_collector makedumpfile -c",
"core_collector makedumpfile -d 17 -c",
"default halt"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-kdump-configuration-cli |
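A minimal sketch that ties the steps in this section together (confirming the reserved memory, enabling the service, and checking that the crash kernel is loaded) follows; the commands assume the default Red Hat Enterprise Linux 6 init scripts described above:

grep crashkernel /proc/cmdline    # confirm that the crashkernel= option took effect at boot
chkconfig kdump on                # start the kdump daemon at boot time
service kdump start               # start the service in the current session
service kdump status              # verify that kdump is operational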
Chapter 8. Quota management architecture | Chapter 8. Quota management architecture With the quota management feature enabled, individual blob sizes are summed at the repository and namespace level. For example, if two tags in the same repository reference the same blob, the size of that blob is only counted once towards the repository total. Additionally, manifest list totals are counted toward the repository total. Important Because manifest list totals are counted toward the repository total, the total quota consumed when upgrading from an earlier version of Red Hat Quay might be reported differently in Red Hat Quay 3.9. In some cases, the new total might go over a repository's previously-set limit. Red Hat Quay administrators might have to adjust the allotted quota of a repository to account for these changes. The quota management feature works by calculating the size of existing repositories and namespaces with a backfill worker, and then adding or subtracting from the total for every image that is pushed or garbage collected afterwards. Additionally, the subtraction from the total happens when the manifest is garbage collected. Note Because subtraction occurs from the total when the manifest is garbage collected, there is a delay in the size calculation until it is able to be garbage collected. For more information about garbage collection, see Red Hat Quay garbage collection . The following database tables hold the quota repository size, quota namespace size, and quota registry size, in bytes, of a Red Hat Quay repository within an organization: QuotaRepositorySize QuotaNameSpaceSize QuotaRegistrySize The organization size is calculated by the backfill worker to ensure that it is not duplicated. When an image push is initialized, the user's organization storage is validated to check if it is beyond the configured quota limits. If an image push exceeds defined quota limitations, a soft or hard check occurs: For a soft check, users are notified. For a hard check, the push is stopped. If storage consumption is within configured quota limits, the push is allowed to proceed. Image manifest deletion follows a similar flow, whereby the links between associated image tags and the manifest are deleted. Additionally, after the image manifest is deleted, the repository size is recalculated and updated in the QuotaRepositorySize , QuotaNameSpaceSize , and QuotaRegistrySize tables. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/red_hat_quay_architecture/quota-management-arch
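As an illustration only, an administrator might set and read back an organization's quota through the Red Hat Quay API as sketched below. The /api/v1/organization/<org>/quota endpoint, the limit_bytes field, the bearer token, and the host name are assumptions to verify against the Red Hat Quay API documentation for your release.

# set a 10 GB quota on the organization (hypothetical endpoint and payload)
curl -k -X POST -H "Authorization: Bearer <token>" -H "Content-Type: application/json" -d '{"limit_bytes": 10737418240}' https://quay.example.com/api/v1/organization/myorg/quota

# read back the configured quota
curl -k -H "Authorization: Bearer <token>" https://quay.example.com/api/v1/organization/myorg/quota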
23.3. Configuring a DHCP Client | 23.3. Configuring a DHCP Client The first step for configuring a DHCP client is to make sure the kernel recognizes the network interface card. Most cards are recognized during the installation process and the system is configured to use the correct kernel module for the card. If a card is added after installation, Kudzu [6] should recognize it and prompt for the configuration of the corresponding kernel module for it. Be sure to check the Hardware Compatibility List available at http://hardware.redhat.com/hcl/ . If the network card is not configured by the installation program or Kudzu and you know which kernel module to load for it, refer to Chapter 37, Kernel Modules for details on loading kernel modules. To configure a DHCP client manually, modify the /etc/sysconfig/network file to enable networking and the configuration file for each network device in the /etc/sysconfig/network-scripts directory. In this directory, each device should have a configuration file named ifcfg-eth0 , where eth0 is the network device name. The /etc/sysconfig/network file should contain the following line: The NETWORKING variable must be set to yes if you want networking to start at boot time. The /etc/sysconfig/network-scripts/ifcfg-eth0 file should contain the following lines: A configuration file is needed for each device to be configured to use DHCP. Other options for the network script include: DHCP_HOSTNAME - Only use this option if the DHCP server requires the client to specify a hostname before receiving an IP address. (The DHCP server daemon in Red Hat Enterprise Linux does not support this feature.) PEERDNS= <answer> , where <answer> is one of the following: yes - Modify /etc/resolv.conf with information from the server. If using DHCP, then yes is the default. no - Do not modify /etc/resolv.conf . SRCADDR= <address> , where <address> is the specified source IP address for outgoing packets. USERCTL= <answer> , where <answer> is one of the following: yes - Non-root users are allowed to control this device. no - Non-root users are not allowed to control this device. If you prefer using a graphical interface, refer to Chapter 17, Network Configuration for details on using the Network Administration Tool to configure a network interface to use DHCP. Note For advanced configurations of client DHCP options such as protocol timing, lease requirements and requests, dynamic DNS support, aliases, as well as a wide variety of values to override, prepend, or append to client-side configurations, refer to the dhclient and dhclient.conf man pages. [6] Kudzu is a hardware probing tool run at system boot time to determine what hardware has been added or removed from the system. | [
"NETWORKING=yes",
"DEVICE=eth0 BOOTPROTO=dhcp ONBOOT=yes"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Dynamic_Host_Configuration_Protocol_DHCP-Configuring_a_DHCP_Client |
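Combining the optional settings described above, a DHCP client interface file might look like the following sketch; the DHCP_HOSTNAME value is a placeholder, and the interface must be restarted for the changes to take effect:

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
DHCP_HOSTNAME=myhost    # only if the DHCP server requires a hostname before assigning an address
PEERDNS=yes             # allow /etc/resolv.conf to be updated with information from the server
USERCTL=no              # do not allow non-root users to control this device

# apply the change
service network restart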
Chapter 8. StorageState [migration.k8s.io/v1alpha1] | Chapter 8. StorageState [migration.k8s.io/v1alpha1] Description The state of the storage of a specific resource. Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the storage state. status object Status of the storage state. 8.1.1. .spec Description Specification of the storage state. Type object Property Type Description resource object The resource this storageState is about. 8.1.2. .spec.resource Description The resource this storageState is about. Type object Property Type Description group string The name of the group. resource string The name of the resource. 8.1.3. .status Description Status of the storage state. Type object Property Type Description currentStorageVersionHash string The hash value of the current storage version, as shown in the discovery document served by the API server. Storage Version is the version to which objects are converted to before persisted. lastHeartbeatTime string LastHeartbeatTime is the last time the storage migration triggering controller checks the storage version hash of this resource in the discovery document and updates this field. persistedStorageVersionHashes array (string) The hash values of storage versions that persisted instances of spec.resource might still be encoded in. "Unknown" is a valid value in the list, and is the default value. It is not safe to upgrade or downgrade to an apiserver binary that does not support all versions listed in this field, or if "Unknown" is listed. Once the storage version migration for this resource has completed, the value of this field is refined to only contain the currentStorageVersionHash. Once the apiserver has changed the storage version, the new storage version is appended to the list. 8.2. API endpoints The following API endpoints are available: /apis/migration.k8s.io/v1alpha1/storagestates DELETE : delete collection of StorageState GET : list objects of kind StorageState POST : create a StorageState /apis/migration.k8s.io/v1alpha1/storagestates/{name} DELETE : delete a StorageState GET : read the specified StorageState PATCH : partially update the specified StorageState PUT : replace the specified StorageState /apis/migration.k8s.io/v1alpha1/storagestates/{name}/status GET : read status of the specified StorageState PATCH : partially update status of the specified StorageState PUT : replace status of the specified StorageState 8.2.1. /apis/migration.k8s.io/v1alpha1/storagestates HTTP method DELETE Description delete collection of StorageState Table 8.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind StorageState Table 8.2. 
HTTP responses HTTP code Reponse body 200 - OK StorageStateList schema 401 - Unauthorized Empty HTTP method POST Description create a StorageState Table 8.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.4. Body parameters Parameter Type Description body StorageState schema Table 8.5. HTTP responses HTTP code Reponse body 200 - OK StorageState schema 201 - Created StorageState schema 202 - Accepted StorageState schema 401 - Unauthorized Empty 8.2.2. /apis/migration.k8s.io/v1alpha1/storagestates/{name} Table 8.6. Global path parameters Parameter Type Description name string name of the StorageState HTTP method DELETE Description delete a StorageState Table 8.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified StorageState Table 8.9. HTTP responses HTTP code Reponse body 200 - OK StorageState schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified StorageState Table 8.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.11. HTTP responses HTTP code Reponse body 200 - OK StorageState schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified StorageState Table 8.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body StorageState schema Table 8.14. HTTP responses HTTP code Reponse body 200 - OK StorageState schema 201 - Created StorageState schema 401 - Unauthorized Empty 8.2.3. /apis/migration.k8s.io/v1alpha1/storagestates/{name}/status Table 8.15. Global path parameters Parameter Type Description name string name of the StorageState HTTP method GET Description read status of the specified StorageState Table 8.16. HTTP responses HTTP code Reponse body 200 - OK StorageState schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified StorageState Table 8.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.18. HTTP responses HTTP code Reponse body 200 - OK StorageState schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified StorageState Table 8.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.20. Body parameters Parameter Type Description body StorageState schema Table 8.21. HTTP responses HTTP code Reponse body 200 - OK StorageState schema 201 - Created StorageState schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/storage_apis/storagestate-migration-k8s-io-v1alpha1 |
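In practice these objects are usually read with the OpenShift CLI rather than by calling the endpoints directly; the following sketch lists the cluster-scoped storage states and inspects the status fields described above (the object names in the output depend on your cluster):

oc get storagestates.migration.k8s.io                          # list StorageState objects
oc get storagestates.migration.k8s.io <name> -o yaml           # shows currentStorageVersionHash and lastHeartbeatTime
oc get --raw /apis/migration.k8s.io/v1alpha1/storagestates     # equivalent GET on the list endpoint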
Chapter 3. Deploy standalone Multicloud Object Gateway | Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. After deploying the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator. Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, the result can be total loss of the application data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 3.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage .
If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.3. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway (MCG) component while deploying OpenShift Data Foundation. After you create the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Create a new StorageClass using the local storage devices option. Click . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Filesystem is selected by default. Always ensure that the Filesystem is selected for Volume Mode . Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. 
The Selected nodes list shows the nodes based on the storage class. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. 
Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) | [
"oc annotate namespace openshift-storage openshift.io/node-selector="
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/deploy-standalone-multicloud-object-gateway |
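As a CLI counterpart to the console verification steps in the table above, the following sketch checks the same pods and the StorageSystem resource from a terminal. It assumes the openshift-storage namespace and the pod name prefixes listed in the verification table; adjust the grep pattern to the components you deployed.

```bash
# Confirm the standalone MCG components are running (name prefixes taken from the table above).
oc get pods -n openshift-storage | grep -E 'noobaa|ocs-operator|rook-ceph-operator|odf-'
# Confirm the StorageSystem created by the wizard exists.
oc get storagesystem -n openshift-storage
```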
Chapter 6. Software management | Chapter 6. Software management 6.1. Notable changes to the YUM stack 6.1.1. Package management with YUM/DNF On Red Hat Enterprise Linux 8, installing software is ensured by the YUM tool, which is based on the DNF technology ( YUM v4 ). We deliberately adhere to usage of the yum term for consistency with major versions of RHEL. However, if you type dnf instead of yum , the command works as expected because yum is an alias to dnf for compatibility. For more details, see Installing, managing, and removing user-space components . 6.1.2. Advantages of YUM v4 over YUM v3 YUM v4 has the following advantages over the YUM v3 used on RHEL 7: Increased performance Support for modular content Well-designed stable API for integration with tooling For detailed information about differences between the new YUM v4 tool and the version YUM v3 from RHEL 7, see Changes in DNF CLI compared to YUM . 6.1.3. How to use YUM v4 Installing software YUM v4 is compatible with YUM v3 when using from the command line, editing or creating configuration files. For installing software, you can use the yum command and its particular options in the same way as on RHEL 7. See more detailed information about Installing software packages . Availability of plug-ins Legacy YUM v3 plug-ins are incompatible with the new version of YUM v4 . Selected yum plug-ins and utilities have been ported to the new DNF back end, and can be installed under the same names as in RHEL 7. They also provide compatibility symlinks, so the binaries, configuration files and directories can be found in usual locations. In the event that a plug-in is no longer included, or a replacement does not meet a usability need, please reach out to Red Hat Support to request a Feature Enhancement as described in How do I open and manage a support case on the Customer Portal? For more information, see Plugin Interface . Availability of APIs Note that the legacy Python API provided by YUM v3 is no longer available. Users are advised to migrate their plug-ins and scripts to the new API provided by YUM v4 (DNF Python API), which is stable and fully supported. The upstream project documents the new DNF Python API - see the DNF API Reference . The Libdnf and Hawkey APIs (both C and Python) are to be considered unstable, and will likely change during RHEL 8 life cycle. 6.1.4. Availability of YUM configuration file options The changes in configuration file options between RHEL 7 and RHEL 8 for the /etc/yum.conf and /etc/yum.repos.d/*.repo files are documented in the following summary. Table 6.1. 
Changes in configuration file options for the /etc/yum.conf file RHEL 7 option RHEL 8 status alwaysprompt removed assumeno available assumeyes available autocheck_running_kernel available autosavets removed bandwidth available bugtracker_url available cachedir available check_config_file_age available clean_requirements_on_remove available color available color_list_available_downgrade available color_list_available_install available color_list_available_reinstall available color_list_available_running_kernel removed color_list_available_upgrade available color_list_installed_extra available color_list_installed_newer available color_list_installed_older available color_list_installed_reinstall available color_list_installed_running_kernel removed color_search_match available color_update_installed available color_update_local available color_update_remote available commands removed config_file_path available debuglevel available deltarpm available deltarpm_metadata_percentage removed deltarpm_percentage available depsolve_loop_limit removed disable_excludes available diskspacecheck available distroverpkg removed enable_group_conditionals removed errorlevel available exactarchlist removed exclude available exit_on_lock available fssnap_abort_on_errors removed fssnap_automatic_keep removed fssnap_automatic_post removed fssnap_automatic_pre removed fssnap_devices removed fssnap_percentage removed ftp_disable_epsv removed gpgcheck available group_command removed group_package_types available groupremove_leaf_only removed history_list_view available history_record available history_record_packages available http_caching removed include removed installonly_limit available installonlypkgs available installrootkeep removed ip_resolve available keepalive removed keepcache available kernelpkgnames removed loadts_ignoremissing removed loadts_ignorenewrpm removed loadts_ignorerpm removed localpkg_gpgcheck available logfile removed max_connections removed mddownloadpolicy removed mdpolicy removed metadata_expire available metadata_expire_filter removed minrate available mirrorlist_expire removed multilib_policy available obsoletes available override_install_langs removed overwrite_groups removed password available payload_gpgcheck removed persistdir available pluginconfpath available pluginpath available plugins available protected_multilib removed protected_packages available proxy available proxy_password available proxy_username available query_install_excludes removed recent available recheck_installed_requires removed remove_leaf_only removed repo_gpgcheck available repopkgsremove_leaf_only removed reposdir available reset_nice available retries available rpmverbosity available shell_exit_status removed showdupesfromrepos available skip_broken available skip_missing_names_on_install removed skip_missing_names_on_update removed ssl_check_cert_permissions removed sslcacert available sslclientcert available sslclientkey available sslverify available syslog_device removed syslog_facility removed syslog_ident removed throttle available timeout available tolerant removed tsflags available ui_repoid_vars removed upgrade_group_objects_upgrade available upgrade_requirements_on_install removed usercache removed username available usr_w_check removed Table 6.2. 
Changes in configuration file options for the /etc/yum.repos.d/*.repo file RHEL 7 option RHEL 8 status async removed bandwidth available baseurl available compare_providers_priority removed cost available deltarpm_metadata_percentage removed deltarpm_percentage available enabled available enablegroups available exclude available failovermethod removed ftp_disable_epsv removed gpgcakey removed gpgcheck available gpgkey available http_caching removed includepkgs available ip_resolve available keepalive removed metadata_expire available metadata_expire_filter removed metalink available mirrorlist available mirrorlist_expire removed name available password available proxy available proxy_password available proxy_username available repo_gpgcheck available repositoryid removed retries available skip_if_unavailable available ssl_check_cert_permissions removed sslcacert available sslclientcert available sslclientkey available sslverify available throttle available timeout available ui_repoid_vars removed username available 6.1.5. YUM v4 features behaving differently Some of the YUM v3 features may behave differently in YUM v4 . If any such change negatively impacts your workflows, please open a case with Red Hat Support, as described in How do I open and manage a support case on the Customer Portal? 6.1.5.1. yum list presents duplicate entries When listing packages using the yum list command, duplicate entries may be presented, one for each repository where a package of the same name and version resides. This is intentional, and it allows the users to distinguish such packages when necessary. For example, if package-1.2 is available in both repo1 and repo2, YUM v4 will print both instances: By contrast, the legacy YUM v3 command filtered out such duplicates so that only one instance was shown: 6.1.6. Changes in the transaction history log files The changes in the transaction history log files between RHEL 7 and RHEL 8 are documented in the following summary. In RHEL 7, the /var/log/yum.log file stores: Registry of installations, updates, and removals of the software packages Transactions from yum and PackageKit In RHEL 8, there is no direct equivalent to the /var/log/yum.log file. To display the information about the transactions, including the PackageKit and microdnf , use the yum history command. Alternatively, you can search the /var/log/dnf.rpm.log file, but this log file does not include the transactions from PackageKit and microdnf, and it has a log rotation which provides the periodic removal of the stored information. 6.1.7. The deltarpm functionality is no longer supported RHEL 8 no longer supports the use of delta rpms . To utilize delta rpms , a user must install the deltarpm package which is no longer available. The deltarpm replacement, drpm , does not provide the same functionality. Thus, the RHEL 8 content is not delivered in the deltarpm format. Note that this functionality will be completely removed in future RHEL releases. 6.2. Notable RPM features and changes Red Hat Enterprise Linux (RHEL) 8 is distributed with RPM 4.14. This version introduces many enhancements over RPM 4.11, which is available in RHEL 7. Notable features include: The debuginfo packages can be installed in parallel Support for weak dependencies Support for rich or boolean dependencies Support for packaging files above 4 GB in size Support for file triggers New --nopretrans and --noposttrans switches to disable the execution of the %pretrans and %posttrans scriptlets respectively. 
New --noplugins switch to disable loading and execution of all RPM plug-ins. New syslog plug-in for logging any RPM activity by the System Logging protocol (syslog). The rpmbuild command can now do all build steps from a source package directly. This is possible if rpmbuild is used with any of the -r[abpcils] options. Support for the reinstall mode. This is ensured by the new --reinstall option. To reinstall a previously installed package, use the syntax below: This option ensures a proper installation of the new package and removal of the old package. Support for SSD conservation mode. This is ensured by the new %_minimize_writes macro, which is available in the /usr/lib/rpm/macros file. The macro is by default set to 0. To minimize writing to SSD disks, set %_minimize_writes to 1. New rpm2archive utility for converting rpm payload to tar archives See more information about New RPM features in RHEL 8 . Notable changes include: Stricter spec-parser Simplified signature checking the output in non-verbose mode Improved support for reproducible builds (builds that create an identical package): Setting build time Setting file mtime (file modification time) Setting buildhost Using the -p option to query an uninstalled PACKAGE_FILE is now optional. For this use case, the rpm command now returns the same result with or without the -p option. The only use case where the -p option is necessary is to verify that the file name does not match any Provides in the rpmdb database. Additions and deprecations in macros The %makeinstall macro has been deprecated. To install a program, use the %make_install macro instead. The rpmbuild --sign command has been deprecated. Note that using the --sign option with the rpmbuild command has been deprecated. To add a signature to an already existing package, use rpm --addsign instead. | [
"[... ] package-1.2 repo1 package-1.2 repo2 [... ]",
"[... ] package-1.2 repo1 [... ]",
"{--reinstall} [install-options] PACKAGE_FILE"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/software-management_considerations-in-adopting-rhel-8 |
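To make the YUM v3 to YUM v4 differences in the section above concrete, the following is a short, hedged command sequence on a RHEL 8 host; the package name is only an example. Note in particular that yum history replaces browsing /var/log/yum.log, as described in the transaction history section.

```bash
# yum is an alias for dnf on RHEL 8, so either command name works.
yum install -y httpd           # install an example package
yum history list               # transaction history (replaces /var/log/yum.log)
yum history info last          # details of the most recent transaction
yum module list                # modular content support, new in YUM v4
```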
Chapter 3. Getting started | Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites You must complete the installation procedure for your environment. You must have an AMQP 1.0 message broker listening for connections on interface localhost and port 5672 . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named examples . For more information, see Creating a queue . 3.2. Running Hello World on Red Hat Enterprise Linux The Hello World example creates a connection to the broker, sends a message containing a greeting to the examples queue, and receives it back. On success, it prints the received message to the console. Procedure Copy the examples to a location of your choosing. USD cp -r /usr/share/proton/examples/cpp cpp-examples Create a build directory and change to that directory: USD mkdir cpp-examples/bld USD cd cpp-examples/bld Use cmake to configure the build and use make to compile the examples. USD cmake .. USD make Run the helloworld program. USD ./helloworld Hello World! | [
"cp -r /usr/share/proton/examples/cpp cpp-examples",
"mkdir cpp-examples/bld cd cpp-examples/bld",
"cmake .. make",
"./helloworld Hello World!"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_cpp_client/getting_started |
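Before running the example, it can save time to confirm that the prerequisites from this chapter are actually in place. The check below is only a sketch: it assumes a Linux host with the ss utility and a broker that is already configured with anonymous access and an examples queue, as stated in the prerequisites.

```bash
# Confirm something is listening on the expected AMQP port before running helloworld.
ss -tln | grep -q ':5672 ' && echo "broker port 5672 is open" || echo "no listener on 5672"
./helloworld
```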
Chapter 3. Installing the Cluster Observability Operator | Chapter 3. Installing the Cluster Observability Operator As a cluster administrator, you can install or remove the Cluster Observability Operator (COO) from OperatorHub by using the OpenShift Container Platform web console. OperatorHub is a user interface that works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. 3.1. Installing the Cluster Observability Operator in the web console Install the Cluster Observability Operator (COO) from OperatorHub by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have logged in to the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Type cluster observability operator in the Filter by keyword box. Click Cluster Observability Operator in the list of results. Read the information about the Operator, and configure the following installation settings: Update channel stable Version 1.0.0 or later Installation mode All namespaces on the cluster (default) Installed Namespace Operator recommended Namespace: openshift-cluster-observability-operator Select Enable Operator recommended cluster monitoring on this Namespace Update approval Automatic Optional: You can change the installation settings to suit your requirements. For example, you can select to subscribe to a different update channel, to install an older released version of the Operator, or to require manual approval for updates to new versions of the Operator. Click Install . Verification Go to Operators Installed Operators , and verify that the Cluster Observability Operator entry appears in the list. Additional resources Adding Operators to a cluster 3.2. Uninstalling the Cluster Observability Operator using the web console If you have installed the Cluster Observability Operator (COO) by using OperatorHub, you can uninstall it in the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have logged in to the OpenShift Container Platform web console. Procedure Go to Operators Installed Operators . Locate the Cluster Observability Operator entry in the list. Click for this entry and select Uninstall Operator . Verification Go to Operators Installed Operators , and verify that the Cluster Observability Operator entry no longer appears in the list. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/cluster_observability_operator/installing-cluster-observability-operators |
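In addition to the web-console verification above, the installation can be confirmed from the CLI. This sketch assumes the Operator-recommended namespace shown in the procedure, openshift-cluster-observability-operator.

```bash
# List the ClusterServiceVersion and pods created by the Operator installation.
oc get csv -n openshift-cluster-observability-operator
oc get pods -n openshift-cluster-observability-operator
```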
10.5.42. Redirect | 10.5.42. Redirect When a webpage is moved, Redirect can be used to map the file location to a new URL. The format is as follows: In this example, replace <old-path> with the old path information for <file-name> and <current-domain> and <current-path> with the current domain and path information for <file-name> . With this directive in place, any requests for <file-name> at the old location are automatically redirected to the new location. For more advanced redirection techniques, use the mod_rewrite module included with the Apache HTTP Server. For more information about configuring the mod_rewrite module, refer to the Apache Software Foundation documentation online at http://httpd.apache.org/docs-2.0/mod/mod_rewrite.html . | [
"Redirect / <old-path> / <file-name> http:// <current-domain> / <current-path> / <file-name>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-redirect |
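As a worked illustration of the Redirect format shown above, the snippet below drops a redirect into a separate configuration file and reloads the server. The file name, old path, and target URL are invented for the example; adapt them to your own layout before use.

```bash
# Illustrative only: map an old document location to its new URL, then reload Apache.
cat > /etc/httpd/conf.d/example-redirect.conf <<'EOF'
Redirect /docs/old-guide.html http://www.example.com/docs/new-guide.html
EOF
apachectl graceful
```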
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/performing_disaster_recovery_with_identity_management/proc_providing-feedback-on-red-hat-documentation_performing-disaster-recovery |
Using JBoss EAP XP 4.0.0 | Using JBoss EAP XP 4.0.0 Red Hat JBoss Enterprise Application Platform 7.4 For Use with JBoss EAP XP 4.0.0 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_4.0.0/index |
Chapter 2. GNU Compiler Collection (GCC) | Chapter 2. GNU Compiler Collection (GCC) The GNU Compiler Collection , commonly abbreviated GCC , is a portable compiler suite with support for a wide selection of programming languages. Red Hat Developer Toolset is distributed with GCC 12.2.1 . This version is more recent than the version included in Red Hat Enterprise Linux and provides a number of bug fixes and enhancements. 2.1. GNU C Compiler 2.1.1. Installing the C Compiler In Red Hat Developer Toolset, the GNU C compiler is provided by the devtoolset-12-gcc package and is automatically installed with devtoolset-12-toolchain as described in Section 1.5, "Installing Red Hat Developer Toolset" . 2.1.2. Using the C Compiler To compile a C program on the command line, run the gcc compiler as follows: This creates a binary file named output_file in the current working directory. If the -o option is omitted, the compiler creates a file named a.out by default. When you are working on a project that consists of several source files, it is common to compile an object file for each of the source files first and then link these object files together. This way, when you change a single source file, you can recompile only this file without having to compile the entire project. To compile an object file on the command line,: This creates an object file named object_file . If the -o option is omitted, the compiler creates a file named after the source file with the .o file extension. To link object files together and create a binary file: Note that you can execute any command using the scl utility, causing it to be run with the Red Hat Developer Toolset binaries used in preference to the Red Hat Enterprise Linux system equivalent. This allows you to run a shell session with Red Hat Developer Toolset gcc as default: Note To verify the version of gcc you are using at any point: Red Hat Developer Toolset's gcc executable path will begin with /opt . Alternatively, you can use the following command to confirm that the version number matches that for Red Hat Developer Toolset gcc : Example 2.1. Compiling a C Program on the Command Line Consider a source file named hello.c with the following contents: #include <stdio.h> int main(int argc, char *argv[]) { printf("Hello, World!\n"); return 0; } Compile this source code on the command line by using the gcc compiler from Red Hat Developer Toolset: This creates a new binary file called hello in the current working directory. 2.1.3. Running a C Program When gcc compiles a program, it creates an executable binary file. To run this program on the command line, change to the directory with the executable file and run it: Example 2.2. Running a C Program on the Command Line Assuming that you have successfully compiled the hello binary file as shown in Example 2.1, "Compiling a C Program on the Command Line" , you can run it by typing the following at a shell prompt: 2.2. GNU C++ Compiler 2.2.1. Installing the C++ Compiler In Red Hat Developer Toolset, the GNU C++ compiler is provided by the devtoolset-12-gcc-c++ package and is automatically installed with the devtoolset-12-toolchain package as described in Section 1.5, "Installing Red Hat Developer Toolset" . 2.2.2. Using the C++ Compiler To compile a C++ program on the command line, run the g++ compiler as follows: This creates a binary file named output_file in the current working directory. If the -o option is omitted, the g++ compiler creates a file named a.out by default. 
When you are working on a project that consists of several source files, it is common to compile an object file for each of the source files first and then link these object files together. This way, when you change a single source file, you can recompile only this file without having to compile the entire project. To compile an object file on the command line: This creates an object file named object_file . If the -o option is omitted, the g++ compiler creates a file named after the source file with the .o file extension. To link object files together and create a binary file: Note that you can execute any command using the scl utility, causing it to be run with the Red Hat Developer Toolset binaries used in preference to the Red Hat Enterprise Linux system equivalent. This allows you to run a shell session with Red Hat Developer Toolset g++ as default: Note To verify the version of g++ you are using at any point: Red Hat Developer Toolset's g++ executable path will begin with /opt . Alternatively, you can use the following command to confirm that the version number matches that for Red Hat Developer Toolset g++ : Example 2.3. Compiling a C++ Program on the Command Line Consider a source file named hello.cpp with the following contents: #include <iostream> using namespace std; int main(int argc, char *argv[]) { cout << "Hello, World!" << endl; return 0; } Compile this source code on the command line by using the g++ compiler from Red Hat Developer Toolset: This creates a new binary file called hello in the current working directory. 2.2.3. Running a C++ Program When g++ compiles a program, it creates an executable binary file. To run this program on the command line, change to the directory with the executable file and run it: Example 2.4. Running a C++ Program on the Command Line Assuming that you have successfully compiled the hello binary file as shown in Example 2.3, "Compiling a C++ Program on the Command Line" , you can run it: 2.2.4. C++ Compatibility All compilers from Red Hat Enterprise Linux versions 5, 6, and 7 and from Red Hat Developer Toolset versions 1 to 10 in any -std mode are compatible with any other of those compilers in C++98 mode. A compiler in C++11, C++14, or C++17 mode is only guaranteed to be compatible with another compiler in those same modes if they are from the same release series. Supported examples: C++11 and C++11 from Red Hat Developer Toolset 6.x C++14 and C++14 from Red Hat Developer Toolset 6.x C++17 and C++17 from Red Hat Developer Toolset 10.x Important The GCC compiler in Red Hat Developer Toolset 10.x can build code using C++20 but this capability is experimental and not supported by Red Hat. All compatibility information mentioned in this section is relevant only for Red Hat-supplied versions of the GCC C++ compiler. 2.2.4.1. C++ ABI Any C++98-compliant binaries or libraries built by the Red Hat Developer Toolset toolchain explicitly with -std=c++98 or -std=gnu++98 can be freely mixed with binaries and shared libraries built by the Red Hat Enterprise Linux 5, 6 or 7 system GCC. The default language standard setting for Red Hat Developer Toolset 12.1 is C++17 with GNU extensions, equivalent to explicitly using option -std=gnu++17 . Using the C++14 language version is supported in Red Hat Developer Toolset when all C++ objects compiled with the respective flag have been built using Red Hat Developer Toolset 6 or later. 
Objects compiled by the system GCC in its default mode of C++98 are also compatible, but objects compiled with the system GCC in C++11 or C++14 mode are not compatible. Starting with Red Hat Developer Toolset 10.x, using the C++17 language version is no longer experimental and is supported by Red Hat. All C++ objects compiled with C++17 must be built using Red Hat Developer Toolset 10.x or later. Important Use of C++11, C++14, and C++17 features in your application requires careful consideration of the above ABI compatibility information. The mixing of objects, binaries and libraries, built by the Red Hat Enterprise Linux 7 system toolchain GCC using the -std=c++0x or -std=gnu++0x flags, with those built with the C++11 or later language versions using the GCC in Red Hat Developer Toolset is explicitly not supported. Aside from the C++11, C++14, and C++17 ABI, discussed above, the Red Hat Enterprise Linux Application Compatibility Specification is unchanged for Red Hat Developer Toolset. When mixing objects built with Red Hat Developer Toolset with those built with the Red Hat Enterprise Linux 7 toolchain (particularly .o / .a files), the Red Hat Developer Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by Red Hat Developer Toolset are resolved at link-time. A new standard mangling for SIMD vector types has been added to avoid name clashes on systems with vectors of varying lengths. The compiler in Red Hat Developer Toolset uses the new mangling by default. It is possible to use the standard mangling by adding the -fabi-version=2 or -fabi-version=3 options to GCC C++ compiler calls. To display a warning about code that uses the old mangling, use the -Wabi option. On Red Hat Enterprise Linux 7, the GCC C++ compiler still uses the old mangling by default, but emits aliases with the new mangling on targets that support strong aliases. It is possible to use the new standard mangling by adding the -fabi-version=4 option to compiler calls. To display a warning about code that uses the old mangling, use the -Wabi option. 2.3. GNU Fortran Compiler 2.3.1. Installing the Fortran Compiler In Red Hat Developer Toolset, the GNU Fortran compiler is provided by the devtoolset-12-gcc-gfortran package and is automatically installed with devtoolset-12-toolchain as described in Section 1.5, "Installing Red Hat Developer Toolset" . 2.3.2. Using the Fortran Compiler To compile a Fortran program on the command line, run the gfortran compiler as follows: This creates a binary file named output_file in the current working directory. If the -o option is omitted, the compiler creates a file named a.out by default. When you are working on a project that consists of several source files, it is common to compile an object file for each of the source files first and then link these object files together. This way, when you change a single source file, you can recompile only this file without having to compile the entire project. To compile an object file on the command line: This creates an object file named object_file . If the -o option is omitted, the compiler creates a file named after the source file with the .o file extension. To link object files together and create a binary file: Note that you can execute any command using the scl utility, causing it to be run with the Red Hat Developer Toolset binaries used in preference to the Red Hat Enterprise Linux system equivalent. 
This allows you to run a shell session with Red Hat Developer Toolset gfortran as default: Note To verify the version of gfortran you are using at any point: Red Hat Developer Toolset's gfortran executable path will begin with /opt . Alternatively, you can use the following command to confirm that the version number matches that for Red Hat Developer Toolset gfortran : Example 2.5. Compiling a Fortran Program on the Command Line Consider a source file named hello.f with the following contents: program hello print *, "Hello, World!" end program hello Compile this source code on the command line by using the gfortran compiler from Red Hat Developer Toolset: This creates a new binary file called hello in the current working directory. 2.3.3. Running a Fortran Program When gfortran compiles a program, it creates an executable binary file. To run this program on the command line, change to the directory with the executable file and run it: Example 2.6. Running a Fortran Program on the Command Line Assuming that you have successfully compiled the hello binary file as shown in Example 2.5, "Compiling a Fortran Program on the Command Line" , you can run it: 2.4. Specifics of GCC in Red Hat Developer Toolset Static linking of libraries Certain more recent library features are statically linked into applications built with Red Hat Developer Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk as standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In Red Hat Developer Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from the Red Hat Developer Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC . 2.5. Additional Resources For more information about the GNU Compiler Collections and its features, see the resources listed below. Installed Documentation gcc (1) - The manual page for the gcc compiler provides detailed information on its usage; with few exceptions, g++ accepts the same command line options as gcc . To display the manual page for the version included in Red Hat Developer Toolset: gfortran (1) - The manual page for the gfortran compiler provides detailed information on its usage. 
To display the manual page for the version included in Red Hat Developer Toolset: C++ Standard Library Documentation - Documentation on the C++ standard library can be optionally installed: Once installed, HTML documentation is available at /opt/rh/devtoolset-12/root/usr/share/doc/devtoolset-12-libstdc++-docs-12.2.1/html/index.html . Online Documentation Red Hat Enterprise Linux 7 Developer Guide - The Developer Guide for Red Hat Enterprise Linux 7 provides in-depth information about GCC . Using the GNU Compiler Collection - The upstream GCC manual provides an in-depth description of the GNU compilers and their usage. The GNU C++ Library - The GNU C++ library documentation provides detailed information about the GNU implementation of the standard C++ library. The GNU Fortran Compiler - The GNU Fortran compiler documentation provides detailed information on gfortran 's usage. See Also Chapter 1, Red Hat Developer Toolset - An overview of Red Hat Developer Toolset and more information on how to install it on your system. Chapter 4, binutils - Instructions on using binutils , a collection of binary tools to inspect and manipulate object files and binaries. Chapter 5, elfutils - Instructions on using elfutils , a collection of binary tools to inspect and manipulate ELF files. Chapter 6, dwz - Instructions on using the dwz tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. Chapter 8, GNU Debugger (GDB) - Instructions on debugging programs written in C, C++, and Fortran. | [
"scl enable devtoolset-12 'gcc -o output_file source_file ...'",
"scl enable devtoolset-12 'gcc -o object_file -c source_file '",
"scl enable devtoolset-12 'gcc -o output_file object_file ...'",
"scl enable devtoolset-12 'bash'",
"which gcc",
"gcc -v",
"#include <stdio.h> int main(int argc, char *argv[]) { printf(\"Hello, World!\\n\"); return 0; }",
"scl enable devtoolset-12 'gcc -o hello hello.c'",
"./ file_name",
"./hello Hello, World!",
"scl enable devtoolset-12 'g++ -o output_file source_file ...'",
"scl enable devtoolset-12 'g++ -o object_file -c source_file '",
"scl enable devtoolset-12 'g++ -o output_file object_file ...'",
"scl enable devtoolset-12 'bash'",
"which g++",
"g++ -v",
"#include <iostream> using namespace std; int main(int argc, char *argv[]) { cout << \"Hello, World!\" << endl; return 0; }",
"scl enable devtoolset-12 'g++ -o hello hello.cpp'",
"./ file_name",
"./hello Hello, World!",
"scl enable devtoolset-12 'gfortran -o output_file source_file ...'",
"scl enable devtoolset-12 'gfortran -o object_file -c source_file '",
"scl enable devtoolset-12 'gfortran -o output_file object_file ...'",
"scl enable devtoolset-12 'bash'",
"which gfortran",
"gfortran -v",
"program hello print *, \"Hello, World!\" end program hello",
"scl enable devtoolset-12 'gfortran -o hello hello.f'",
"./ file_name",
"./hello Hello, World!",
"scl enable devtoolset-12 'gcc -lsomelib objfile.o'",
"scl enable devtoolset-12 'gcc objfile.o -lsomelib'",
"scl enable devtoolset-12 'man gcc'",
"scl enable devtoolset-12 'man gfortran'",
"yum install devtoolset-12-libstdc++-docs"
] | https://docs.redhat.com/en/documentation/red_hat_developer_toolset/12/html/user_guide/chap-GCC |
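The multi-file workflow from Section 2.1.2 and the library-ordering recommendation from Section 2.4 can be combined into one short session. The source file names and the -lm library below are placeholders; the point is the compile-then-link sequence with the library listed after the object files.

```bash
# Start a shell where Developer Toolset gcc is the default, then build.
scl enable devtoolset-12 'bash'
# Inside that shell:
gcc -c main.c -o main.o        # compile each source file to an object file
gcc -c util.c -o util.o
gcc main.o util.o -lm -o app   # link with the library after the object files, as recommended
./app
```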
Appendix A. Revision History | Appendix A. Revision History Revision History Revision 0.0-1.10 Mon Aug 05 2019 Marie Dolezelova Version for 7.7 GA release. Revision 0.0-1.6 Wed Nov 11 2015 Jana Heves Version for 7.2 GA release. Revision 0.0-1.4 Thu Feb 19 2015 Radek Biba Version for 7.1 GA release. Linux Containers moved to a separate book. Revision 0.0-1.0 Mon Jul 21 2014 Peter Ondrejka Revision 0.0-0.14 Mon May 13 2013 Peter Ondrejka Version for 7.0 GA release | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/resource_management_guide/appe-resource_management_guide-revision_history |
5.178. luci | 5.178. luci 5.178.1. RHBA-2012:0766 - luci bug fix and enhancement update Updated luci packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The luci packages contain a web-based high-availability cluster configuration application. Bug Fixes BZ# 796731 A cluster configuration can define global resources (declared outside of cluster service groups) and in-line resources (declared inside a service group). The names of resources must be unique regardless of whether the resource is a global or an in-line resource. Previously, luci allowed a global resource with a name, which was already used by a resource that had been declared in-line, and a name of a resource within a service group with a name that was already used by a global resource. As a result, luci could terminate unexpectedly with error 500 or the cluster configuration could be modified improperly. With this update, luci fails gracefully under these circumstances and reports the problems, and the cluster configuration remains unmodified. BZ# 749668 If a cluster configuration was not valid, luci terminated unexpectedly without any report about the cluster configuration problem. It was therefore impossible to use luci for administration of clusters with invalid configurations. With this update, luci detects invalid configurations and returns warnings with information about possible mistakes along with proposed fixes. BZ# 690621 The user could not debug problems on-the-fly if debugging was not enabled prior to starting luci . This update adds new controls to allow the user to change the log level of messages generated by luci according to the message type while luci is running. BZ# 801491 When the user created a cluster resource with a name that contained a period symbol ( . ), luci failed to redirect the browser to the resource that was just created. As a result, error 500 was displayed, even though the resource was created correctly. This update corrects the code that handles redirection of the browser after creating such a resource and luci redirects the browser to a screen that displays the resource as expected. BZ# 744048 Previously, luci did not require any confirmation on removal of cluster services. Consequently, the user could remove the services by accident or without properly considering the consequences. With this update, luci displays a confirmation dialog when the user requests removal of cluster services, which informs the user about the consequences, and forces them to confirm their action. BZ# 733753 Since Red Hat Enterprise Linux 6.3 , authenticated sessions automatically expired after 15 minutes of inactivity. With this update, the user can now change the time-out period in the who.auth_tkt_timeout parameter in the /etc/sysconfig/luci file. BZ# 768406 Previously, the default value of the monitor_link attribute of the IP resource agent was displayed incorrectly: when not specified explicitly, its value was displayed as enabled while it was actually disabled, and vice versa. When the user made changes to the monitor_link value using luci , an incorrect value was stored. With this update, the monitor_link value is display properly, and the user can now view and modify the value as expected. BZ# 755092 The force_unmount option was not shown for file-system resources and the user could not change the configuration to enable or disable this option. 
A checkbox that displays its current state was added and the user can now view and change the force_unmount attribute of file-system resources. BZ# 800239 A new attribute, tunneled , was added to the VM (Virtual Machine) resource agent script. This update adds a checkbox displaying the current value of the tunneled attribute to the VM configuration screen so that the user can enable or disable the attribute. BZ# 772314 Previously, an ACL (Access Control List) system was added to allow delegation of permissions to other luci users. However, permissions could not be set for users until they had logged in at least once. With this update, ACLs can be added and changed before the user logs in to luci for the first time. BZ# 820402 Due to a regression, the Intel Modular and IF MIB fencing agents were removed from the list of devices for which users could configure new instances. Consequently, users could not create a new instance of these fencing devices. The Intel Modular and IF MIB fencing device entries have been added back to the list of fence devices and users are again able to create new instances of Intel Modular and IF MIB fencing devices. Enhancements BZ# 704978 In the Create and edit service groups form, the relationships between groups could not always be easily discerned. Solid borders were added along the side of resources within the forms to make the relationships between resources clearer. Also, when adding a resource to a service group, the screen is scrolled to the resource that was added. BZ# 740835 While creating and editing fail-over domains, the user could select and unselect checkboxes and enter values into text fields whose values were ignored. Such checkboxes and text fields are now disabled and become enabled only when their values are used. BZ# 758821 To provide an interface for working with the RRP (Redundant Ring Protocol) configuration in luci , Technology Preview support was added for RRP in the corosync cluster engine that the Red Hat HA stack is built upon. The Redundant Ring configuration tab is now available in the Configure tab of clusters to allow RRP configuration from luci. BZ# 786584 A new resource agent was added to provide high-availability of condor-related system daemons. This update adds support for viewing, creating, and editing the configuration of Condor resources to allow the user to configure the Condor resource agent. BZ# 707471 The reboot icon was similar to the refresh icon and the user could have mistakenly rebooted a cluster node instead of refreshing the status information. With this update, the reboot icon has been changed. Also, a dialog box is now displayed before reboot so the user must confirm their reboot request. Users of luci are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/luci |
Chapter 1. Preparing to deploy OpenShift Data Foundation | Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using the local storage devices provided by IBM Power, you can create internal cluster resources. This approach internally provisions base services so that all the applications can access additional storage classes. Before you begin the deployment of Red Hat OpenShift Data Foundation using local storage, ensure that you meet the resource requirements. See Requirements for installing OpenShift Data Foundation using local storage devices . Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS), follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS . Ensure that you are using signed certificates on your Vault servers. Note If you are using Thales CipherTrust Manager as your KMS, you will enable it during deployment. After you have addressed the above, follow the steps below in the order given: Install Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Find available storage devices . Create OpenShift Data Foundation cluster on IBM Power . 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker nodes with locally attached storage devices on each of them. Each of the three selected nodes must have at least one raw block device available to be used by OpenShift Data Foundation. The devices to be used must be empty, that is, there should be no persistent volumes (PVs), volume groups (VGs), or local volumes (LVs) remaining on the disks. You must have a minimum of three labeled worker nodes. Each node that has local storage devices to be used by OpenShift Data Foundation must have a specific label to deploy OpenShift Data Foundation pods. To label the nodes, use the following command: For more information, see the Resource requirements section in the Planning guide. Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. | [
"oc label nodes <NodeNames> cluster.ocs.openshift.io/openshift-storage=''"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_ibm_power/preparing_to_deploy_openshift_data_foundation |
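The node label command shown above must be applied to every node that contributes local storage devices. A minimal sketch, with made-up worker node names, is:

```bash
# Label three worker nodes for OpenShift Data Foundation and verify the result.
for node in worker-0 worker-1 worker-2; do
  oc label node "$node" cluster.ocs.openshift.io/openshift-storage=''
done
oc get nodes -l cluster.ocs.openshift.io/openshift-storage
```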
2.2. Examples | 2.2. Examples The following examples demonstrate how SELinux increases security: The default action is deny. If an SELinux policy rule does not exist to allow access, such as for a process opening a file, access is denied. SELinux can confine Linux users. A number of confined SELinux users exist in SELinux policy. Linux users can be mapped to confined SELinux users to take advantage of the security rules and mechanisms applied to them. For example, mapping a Linux user to the SELinux user_u user, results in a Linux user that is not able to run (unless configured otherwise) set user ID (setuid) applications, such as sudo and su , as well as preventing them from executing files and applications in their home directory. If configured, this prevents users from executing malicious files from their home directories. Process separation is used. Processes run in their own domains, preventing processes from accessing files used by other processes, as well as preventing processes from accessing other processes. For example, when running SELinux, unless otherwise configured, an attacker cannot compromise a Samba server, and then use that Samba server as an attack vector to read and write to files used by other processes, such as databases used by MySQL. SELinux helps limit the damage made by configuration mistakes. Domain Name System (DNS) servers often replicate information between each other in what is known as a zone transfer. Attackers can use zone transfers to update DNS servers with false information. When running the Berkeley Internet Name Domain (BIND) as a DNS server in Red Hat Enterprise Linux, even if an administrator forgets to limit which servers can perform a zone transfer, the default SELinux policy prevents zone files [3] from being updated via zone transfers, by the BIND named daemon itself, and by other processes. Refer to the Red Hat Magazine article, Risk report: Three years of Red Hat Enterprise Linux 4 [4] , for exploits that were restricted due to the default SELinux targeted policy in Red Hat Enterprise Linux 4. Refer to the NetworkWorld.com article, A seatbelt for server software: SELinux blocks real-world exploits [5] , for background information about SELinux, and information about various exploits that SELinux has prevented. Refer to James Morris's SELinux mitigates remote root vulnerability in OpenPegasus blog post for information about an exploit in OpenPegasus that was mitigated by SELinux as shipped with Red Hat Enterprise Linux 4 and 5. [3] Text files that include information, such as host name to IP address mappings, that are used by DNS servers. [4] Cox, Mark. "Risk report: Three years of Red Hat Enterprise Linux 4". Published 26 February 2008. Accessed 27 August 2009: http://magazine.redhat.com/2008/02/26/risk-report-three-years-of-red-hat-enterprise-linux-4/ . [5] Marti, Don. "A seatbelt for server software: SELinux blocks real-world exploits". Published 24 February 2008. Accessed 27 August 2009: http://www.networkworld.com/article/2283723/lan-wan/a-seatbelt-for-server-software--selinux-blocks-real-world-exploits.html . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-introduction-examples |
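The confined-user example above, mapping a Linux user to the SELinux user_u user, can be reproduced on a test system with the semanage utility (provided by the policycoreutils-python package on Red Hat Enterprise Linux 6). The user name below is an example.

```bash
# Map an existing Linux user to the confined SELinux user_u user and verify the mapping.
semanage login -a -s user_u exampleuser
semanage login -l | grep exampleuser
```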
Providing feedback on Red Hat JBoss Web Server documentation | Providing feedback on Red Hat JBoss Web Server documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_1_release_notes/providing-direct-documentation-feedback_6.0.1_rn |
Chapter 9. Running a Camel service on Spring Boot with XA transactions | Chapter 9. Running a Camel service on Spring Boot with XA transactions The Spring Boot Camel XA transactions quickstart demonstrates how to run a Camel service on Spring Boot that supports XA transactions on two external transactional resources, a JMS resource (A-MQ) and a database (PostgreSQL). These external resources are provided by OpenShift and must be started before running this quickstart. 9.1. StatefulSet resources This quickstart uses OpenShift StatefulSet resources to guarantee the uniqueness of transaction managers, and it requires a PersistentVolume to store transaction logs. The application supports scaling on the StatefulSet resource. Each instance has its own in-process recovery manager. A special controller guarantees that when the application is scaled down, all terminated instances complete all their work correctly without leaving pending transactions. The scale-down operation is rolled back by the controller if the recovery manager has not been able to flush all pending work before terminating. This quickstart uses the Spring Boot Narayana recovery controller. 9.2. Spring Boot Narayana recovery controller The Spring Boot Narayana recovery controller allows you to gracefully handle the scale-down phase of a StatefulSet by cleaning pending transactions before termination. If a scale-down operation is executed and the pod is not clean after termination, the number of replicas is restored, effectively canceling the scale-down operation. All pods of the StatefulSet require access to a shared volume that is used to store the termination status of each pod belonging to the StatefulSet. Pod-0 of the StatefulSet periodically checks the status and scales the StatefulSet to the right size if there is a mismatch. In order for the recovery controller to work, edit permissions on the current namespace are required (a role binding is included in the set of resources published to OpenShift). The recovery controller can be disabled using the CLUSTER_RECOVERY_ENABLED environment variable. In this case, no special permissions are required on the service account, but any scale-down operation may leave pending transactions on the terminated pod without notice. 9.3. Configuring Spring Boot Narayana recovery controller The following example shows how to configure Narayana to work on OpenShift with the recovery controller. Procedure This is a sample application.properties file. Replace the following options in the Kubernetes yaml descriptor. You need a shared volume to store both transactions and information related to termination. It can be mounted in the StatefulSet yaml descriptor as follows. Camel Extension for Spring Boot Narayana Recovery Controller If Camel is found in the Spring Boot application context, the Camel context is automatically stopped before flushing all pending transactions. 9.4. Running Camel Spring Boot XA quickstart on OpenShift This procedure shows how to run the quickstart on a running single-node OpenShift cluster. Procedure Download the Camel Spring Boot XA project. Navigate to the spring-boot-camel-xa directory and run the following command. Log in to the OpenShift server. Create a new project namespace called test (assuming it does not already exist). If the test project namespace already exists, switch to it. Install the dependencies. Install postgresql with the username theuser and the password Thepassword1! . Install the A-MQ broker with the username theuser and the password Thepassword1! .
Create a persistent volume claim for the transaction log. Build and deploy your quickstart. Scale it up to the desired number of replicas. Note: The pod name is used as the transaction manager ID (the spring.jta.transaction-manager-id property). The current implementation also limits the length of transaction manager IDs, so note the following: The name of the StatefulSet is an identifier for the transaction system, so it must not be changed. You should name the StatefulSet so that all of its pod names have a length of 23 characters or fewer. Pod names are created by OpenShift using the convention <statefulset-name>-0, <statefulset-name>-1, and so on. Narayana does its best to avoid having multiple recovery managers with the same ID, so when the pod name is longer than the limit, the last 23 bytes are taken as the transaction manager ID (after stripping some characters, such as -). Once the quickstart is running, get the base service URL using the following command. 9.5. Testing successful XA transactions The following workflow shows how to test successful XA transactions. Procedure Get the list of messages in the audit_log table. The list is empty at the beginning. Now you can add the first element. After waiting for some time, get the new list. The new list contains two messages, hello and hello-ok . The hello-ok message confirms that the message has been sent to an outgoing queue and then logged. You can add multiple messages and see the logs. 9.6. Testing failed XA transactions The following workflow shows how to test failed XA transactions. Procedure Send a message named fail . After waiting for some time, get the new list. This message produces an exception at the end of the route, so the transaction is always rolled back. You should not find any trace of the message in the audit_log table. | [
"Cluster cluster.nodename=1 cluster.base-dir=./target/tx Transaction Data spring.jta.transaction-manager-id=USD{cluster.nodename} spring.jta.log-dir=USD{cluster.base-dir}/store/USD{cluster.nodename} Narayana recovery settings snowdrop.narayana.openshift.recovery.enabled=true snowdrop.narayana.openshift.recovery.current-pod-name=USD{cluster.nodename} You must enable resource filtering in order to inject the Maven artifactId snowdrop.narayana.openshift.recovery.statefulset=USD{project.artifactId} snowdrop.narayana.openshift.recovery.status-dir=USD{cluster.base-dir}/status",
"apiVersion: apps/v1 kind: StatefulSet # spec: # template: # spec: containers: - env: - name: CLUSTER_BASE_DIR value: /var/transaction/data # Override CLUSTER_NODENAME with Kubernetes Downward API (to use `pod-0`, `pod-1` etc. as tx manager id) - name: CLUSTER_NODENAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name # volumeMounts: - mountPath: /var/transaction/data name: the-name-of-the-shared-volume #",
"git clone --branch spring-boot-camel-xa-7.13.0.fuse-7_13_0-00011-redhat-00001 https://github.com/jboss-fuse/spring-boot-camel-xa",
"mvn clean install",
"login -u developer -p developer",
"new-project test",
"project test",
"new-app --param=POSTGRESQL_USER=theuser --param=POSTGRESQL_PASSWORD='Thepassword1!' --env=POSTGRESQL_MAX_PREPARED_TRANSACTIONS=100 --template=postgresql-persistent",
"new-app --param=MQ_USERNAME=theuser --param=MQ_PASSWORD='Thepassword1!' --template=amq63-persistent",
"create -f persistent-volume-claim.yml",
"mvn oc:deploy -Popenshift",
"scale statefulset spring-boot-camel-xa --replicas 3",
"NARAYANA_HOST=USD(oc get route spring-boot-camel-xa -o jsonpath={.spec.host})",
"curl -w \"\\n\" http://USDNARAYANA_HOST/api/",
"curl -w \"\\n\" -X POST http://USDNARAYANA_HOST/api/?entry=hello",
"curl -w \"\\n\" http://USDNARAYANA_HOST/api/",
"curl -w \"\\n\" -X POST http://USDNARAYANA_HOST/api/?entry=fail",
"curl -w \"\\n\" http://USDNARAYANA_HOST/api/"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/fuse_on_openshift_guide/camel-spring-boot-application-with-xa-transactions |
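The CLUSTER_RECOVERY_ENABLED switch mentioned in Section 9.2 does not appear in the descriptors above. As a sketch only, and assuming that setting the variable to false is what disables the controller, it could be added to the StatefulSet container environment like this:

apiVersion: apps/v1
kind: StatefulSet
# ...
spec:
  template:
    spec:
      containers:
      - env:
        # Assumption: "false" disables the Narayana recovery controller described in Section 9.2;
        # with it disabled, scale-down may leave pending transactions on terminated pods.
        - name: CLUSTER_RECOVERY_ENABLED
          value: "false"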
Installing on Alibaba | Installing on Alibaba OpenShift Container Platform 4.15 Installing OpenShift Container Platform on Alibaba Cloud Red Hat OpenShift Documentation Team | [
"Default client type = access_key # Certification type: access_key access_key_id = LTAI5t8cefXKmt # Key 1 access_key_secret = wYx56mszAN4Uunfh # Secret",
"{ \"Version\": \"1\", \"Statement\": [ { \"Action\": [ \"tag:ListTagResources\", \"tag:UntagResources\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"vpc:DescribeVpcs\", \"vpc:DeleteVpc\", \"vpc:DescribeVSwitches\", \"vpc:DeleteVSwitch\", \"vpc:DescribeEipAddresses\", \"vpc:DescribeNatGateways\", \"vpc:ReleaseEipAddress\", \"vpc:DeleteNatGateway\", \"vpc:DescribeSnatTableEntries\", \"vpc:CreateSnatEntry\", \"vpc:AssociateEipAddress\", \"vpc:ListTagResources\", \"vpc:TagResources\", \"vpc:DescribeVSwitchAttributes\", \"vpc:CreateVSwitch\", \"vpc:CreateNatGateway\", \"vpc:DescribeRouteTableList\", \"vpc:CreateVpc\", \"vpc:AllocateEipAddress\", \"vpc:ListEnhanhcedNatGatewayAvailableZones\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"ecs:ModifyInstanceAttribute\", \"ecs:DescribeSecurityGroups\", \"ecs:DeleteSecurityGroup\", \"ecs:DescribeSecurityGroupReferences\", \"ecs:DescribeSecurityGroupAttribute\", \"ecs:RevokeSecurityGroup\", \"ecs:DescribeInstances\", \"ecs:DeleteInstances\", \"ecs:DescribeNetworkInterfaces\", \"ecs:DescribeInstanceRamRole\", \"ecs:DescribeUserData\", \"ecs:DescribeDisks\", \"ecs:ListTagResources\", \"ecs:AuthorizeSecurityGroup\", \"ecs:RunInstances\", \"ecs:TagResources\", \"ecs:ModifySecurityGroupPolicy\", \"ecs:CreateSecurityGroup\", \"ecs:DescribeAvailableResource\", \"ecs:DescribeRegions\", \"ecs:AttachInstanceRamRole\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"pvtz:DescribeRegions\", \"pvtz:DescribeZones\", \"pvtz:DeleteZone\", \"pvtz:DeleteZoneRecord\", \"pvtz:BindZoneVpc\", \"pvtz:DescribeZoneRecords\", \"pvtz:AddZoneRecord\", \"pvtz:SetZoneRecordStatus\", \"pvtz:DescribeZoneInfo\", \"pvtz:DescribeSyncEcsHostTask\", \"pvtz:AddZone\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"slb:DescribeLoadBalancers\", \"slb:SetLoadBalancerDeleteProtection\", \"slb:DeleteLoadBalancer\", \"slb:SetLoadBalancerModificationProtection\", \"slb:DescribeLoadBalancerAttribute\", \"slb:AddBackendServers\", \"slb:DescribeLoadBalancerTCPListenerAttribute\", \"slb:SetLoadBalancerTCPListenerAttribute\", \"slb:StartLoadBalancerListener\", \"slb:CreateLoadBalancerTCPListener\", \"slb:ListTagResources\", \"slb:TagResources\", \"slb:CreateLoadBalancer\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"ram:ListResourceGroups\", \"ram:DeleteResourceGroup\", \"ram:ListPolicyAttachments\", \"ram:DetachPolicy\", \"ram:GetResourceGroup\", \"ram:CreateResourceGroup\", \"ram:DeleteRole\", \"ram:GetPolicy\", \"ram:DeletePolicy\", \"ram:ListPoliciesForRole\", \"ram:CreateRole\", \"ram:AttachPolicyToRole\", \"ram:GetRole\", \"ram:CreatePolicy\", \"ram:CreateUser\", \"ram:DetachPolicyFromRole\", \"ram:CreatePolicyVersion\", \"ram:DetachPolicyFromUser\", \"ram:ListPoliciesForUser\", \"ram:AttachPolicyToUser\", \"ram:CreateUser\", \"ram:GetUser\", \"ram:DeleteUser\", \"ram:CreateAccessKey\", \"ram:ListAccessKeys\", \"ram:DeleteAccessKey\", \"ram:ListUsers\", \"ram:ListPolicyVersions\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"oss:DeleteBucket\", \"oss:DeleteBucketTagging\", \"oss:GetBucketTagging\", \"oss:GetBucketCors\", \"oss:GetBucketPolicy\", \"oss:GetBucketLifecycle\", \"oss:GetBucketReferer\", \"oss:GetBucketTransferAcceleration\", \"oss:GetBucketLog\", \"oss:GetBucketWebSite\", \"oss:GetBucketInfo\", \"oss:PutBucketTagging\", \"oss:PutBucket\", \"oss:OpenOssService\", \"oss:ListBuckets\", \"oss:GetService\", \"oss:PutBucketACL\", \"oss:GetBucketLogging\", 
\"oss:ListObjects\", \"oss:GetObject\", \"oss:PutObject\", \"oss:DeleteObject\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"alidns:DescribeDomainRecords\", \"alidns:DeleteDomainRecord\", \"alidns:DescribeDomains\", \"alidns:DescribeDomainRecordInfo\", \"alidns:AddDomainRecord\", \"alidns:SetDomainRecordStatus\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": \"bssapi:CreateInstance\", \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": \"ram:PassRole\", \"Resource\": \"*\", \"Effect\": \"Allow\", \"Condition\": { \"StringEquals\": { \"acs:Service\": \"ecs.aliyuncs.com\" } } } ] }",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl alibabacloud create-ram-users --name <name> \\ 1 --region=<alibaba_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> 4",
"2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"ls <path_to_ccoctl_output_dir>/manifests",
"openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl alibabacloud create-ram-users --name <name> \\ 1 --region=<alibaba_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> 4",
"2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"ls <path_to_ccoctl_output_dir>/manifests",
"openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/",
"apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{\"auths\": {\"cloud.openshift.com\": {\"auth\": ... }' 8 sshKey: | ssh-rsa AAAA... 9",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{\"auths\": {\"cloud.openshift.com\": {\"auth\": ... }' 8 sshKey: | ssh-rsa AAAA... 9",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{\"auths\": {\"cloud.openshift.com\": {\"auth\": ... }' 8 sshKey: | ssh-rsa AAAA... 9",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl alibabacloud create-ram-users --name <name> \\ 1 --region=<alibaba_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> 4",
"2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"ls <path_to_ccoctl_output_dir>/manifests",
"openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"compute: platform: alibabacloud: imageID:",
"compute: platform: alibabacloud: instanceType:",
"compute: platform: alibabacloud: systemDiskCategory:",
"compute: platform: alibabacloud: systemDisksize:",
"compute: platform: alibabacloud: zones:",
"controlPlane: platform: alibabacloud: imageID:",
"controlPlane: platform: alibabacloud: instanceType:",
"controlPlane: platform: alibabacloud: systemDiskCategory:",
"controlPlane: platform: alibabacloud: systemDisksize:",
"controlPlane: platform: alibabacloud: zones:",
"platform: alibabacloud: region:",
"platform: alibabacloud: resourceGroupID:",
"platform: alibabacloud: tags:",
"platform: alibabacloud: vpcID:",
"platform: alibabacloud: vswitchIDs:",
"platform: alibabacloud: defaultMachinePlatform: imageID:",
"platform: alibabacloud: defaultMachinePlatform: instanceType:",
"platform: alibabacloud: defaultMachinePlatform: systemDiskCategory:",
"platform: alibabacloud: defaultMachinePlatform: systemDiskSize:",
"platform: alibabacloud: defaultMachinePlatform: zones:",
"platform: alibabacloud: privateZoneID:",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/installing_on_alibaba/index |
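The manual-credentials steps above can also be read as one consolidated shell sketch; this simply condenses the documented commands into a single sequence, with the angle-bracket values left as placeholders that you substitute for your environment.

# Generate manifests, extract the CredentialsRequest objects for this release, and create the RAM users
./openshift-install create manifests --dir <installation_directory>
RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
oc adm release extract --from=$RELEASE_IMAGE --credentials-requests --included \
    --install-config=<installation_directory>/install-config.yaml \
    --to=<path_to_directory_for_credentials_requests>
ccoctl alibabacloud create-ram-users --name <name> --region=<alibaba_region> \
    --credentials-requests-dir=<path_to_directory_for_credentials_requests> \
    --output-dir=<path_to_ccoctl_output_dir>

# Copy the generated credential secrets into the installation manifests, then run the installer
cp <path_to_ccoctl_output_dir>/manifests/*credentials.yaml <installation_directory>/manifests/
./openshift-install create cluster --dir <installation_directory> --log-level=info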
Appendix E. The GRUB Boot Loader | Appendix E. The GRUB Boot Loader When a computer running Linux is turned on, the operating system is loaded into memory by a special program called a boot loader . A boot loader usually exists on the system's primary hard drive (or other media device) and has the sole responsibility of loading the Linux kernel with its required files or (in some cases) other operating systems into memory. E.1. Boot Loaders and System Architecture Each architecture capable of running Red Hat Enterprise Linux uses a different boot loader. The following table lists the boot loaders available for each architecture: Table E.1. Boot Loaders by Architecture Architecture Boot Loaders AMD AMD64 - GRUB; IBM Power Systems - yaboot; IBM System z - z/IPL; x86 - GRUB This appendix discusses commands and configuration options for the GRUB boot loader included with Red Hat Enterprise Linux for the x86 architecture. Important The /boot and / (root) partitions in Red Hat Enterprise Linux 6.9 can only use the ext2, ext3, and ext4 (recommended) file systems. You cannot use any other file system for these partitions, such as Btrfs, XFS, or VFAT. Other partitions, such as /home , can use any supported file system, including Btrfs and XFS (if available). See the following article on the Red Hat Customer Portal for additional information: https://access.redhat.com/solutions/667273 . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-grub
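For orientation, a typical GRUB legacy configuration on x86 is a short /boot/grub/grub.conf file along the lines of the following sketch; the kernel version, device names, and UUID shown here are illustrative rather than taken from this guide.

default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux 6 (2.6.32-696.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-696.el6.x86_64 ro root=UUID=f41e390f-835b-4223-a9bb-9b45984ddf8d rhgb quiet
        initrd /initramfs-2.6.32-696.el6.x86_64.img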
Chapter 1. Introduction | Chapter 1. Introduction Red Hat OpenStack Platform director creates a cloud environment called the Overcloud . The Overcloud contains a set of different node types that perform certain roles. One of these node types is the Controller node. The Controller is responsible for Overcloud administration and uses specific OpenStack components. An Overcloud uses multiple Controllers together as a high availability cluster, which ensures maximum operational performance for your OpenStack services. In addition, the cluster provides load balancing for access to the OpenStack services, which evenly distributes traffic to the Controller nodes and reduces server overload for each node. It is also possible to use an external load balancer to perform this distribution. For example, an organization might use their own hardware-based load balancer to handle traffic distribution to the Controller nodes. This guide provides the necessary details to help define the configuration for both an external load balancer and the Overcloud creation. This involves the following process: Installing and Configuring the Load Balancer - This guide includes some HAProxy options for load balancing and services. Translate the settings to the equivalent of your own external load balancer. Configuring and Deploying the Overcloud - This guide includes some Heat template parameters that help the Overcloud integrate with the external load balancer. This mainly involves the IP addresses of the load balancer and potential nodes. This guide also includes the command to start the Overcloud deployment and its configuration to use the external load balancer. 1.1. Using Load Balancing in the Overcloud The Overcloud uses an open source tool called HAProxy . HAProxy load-balances traffic to Controller nodes running OpenStack services. The haproxy package contains the haproxy daemon, which is started from the haproxy systemd service, along with logging features and sample configurations. However, the Overcloud also uses a high availability resource manager (Pacemaker) to control HAProxy itself as a highly available service (haproxy-clone). This means HAProxy runs on each Controller node and distributes traffic according to a set of rules defined in each configuration. 1.2. Defining an Example Scenario This article uses the following scenario as an example: An external load balancing server using HAProxy. This demonstrates how to use a federated HAProxy server. You can substitute this for another supported external load balancer. One OpenStack Platform director node An Overcloud that consists of: 3 Controller nodes in a highly available cluster 1 Compute node Network isolation with VLANs The scenario uses the following IP address assignments for each network: Internal API: 172.16.20.0/24 Project: 172.16.22.0/24 Storage: 172.16.21.0/24 Storage Management: 172.16.19.0/24 External: 172.16.23.0/24 These IP ranges will include IP assignments for the Controller nodes and virtual IPs that the load balancer binds to OpenStack services. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/external_load_balancing_for_the_overcloud/introduction
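To make the kind of rules HAProxy applies more concrete, a single service entry in an external haproxy.cfg could look like the following sketch. The virtual IP on the External network, the Controller addresses on the Internal API network, and the Identity service port 5000 are illustrative values chosen to be consistent with the example ranges above; this is a sketch, not a complete supported configuration.

listen keystone_public
  bind 172.16.23.250:5000
  balance roundrobin
  option tcpka
  option tcplog
  server overcloud-controller-0 172.16.20.150:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 172.16.20.151:5000 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 172.16.20.152:5000 check fall 5 inter 2000 rise 2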
Chapter 1. OpenShift Container Platform installation overview | Chapter 1. OpenShift Container Platform installation overview 1.1. About OpenShift Container Platform installation The OpenShift Container Platform installation program offers four methods for deploying a cluster which are detailed in the following list: Interactive : You can deploy a cluster with the web-based Assisted Installer . This is an ideal approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform, it provides smart defaults, and it performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios. Local Agent-based : You can deploy a cluster locally with the Agent-based Installer for disconnected environments or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first. Configuration is done with a command-line interface. This approach is ideal for disconnected environments. Automated : You can deploy a cluster on installer-provisioned infrastructure. The installation program uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters in connected or disconnected environments. Full control : You can deploy a cluster on infrastructure that you prepare and maintain, which provides maximum customizability. You can deploy clusters in connected or disconnected environments. Each method deploys a cluster with the following characteristics: Highly available infrastructure with no single points of failure, which is available by default. Administrators can control what updates are applied and when. 1.1.1. About the installation program You can use the installation program to deploy each type of cluster. The installation program generates the main assets, such as Ignition config files for the bootstrap, control plane, and compute machines. You can start an OpenShift Container Platform cluster with these three machine configurations, provided you correctly configured the infrastructure. The OpenShift Container Platform installation program uses a set of targets and dependencies to manage cluster installations. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel with the ultimate target being a running cluster. The installation program recognizes and uses existing components instead of running commands to create them again because the program meets the dependencies. Figure 1.1. OpenShift Container Platform installation targets and dependencies 1.1.2. About Red Hat Enterprise Linux CoreOS (RHCOS) Post-installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. RHCOS includes the kubelet , which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes. Every control plane machine in an OpenShift Container Platform 4.16 cluster must use RHCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines. 
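For a sense of what Ignition consumes, a minimal Ignition config is a small JSON document. The sketch below, whose spec version and file contents are illustrative rather than taken from a generated cluster asset, simply writes one file on first boot; it is not one of the configs that the installation program produces.

{
  "ignition": { "version": "3.2.0" },
  "storage": {
    "files": [
      {
        "path": "/etc/example-motd",
        "mode": 420,
        "overwrite": true,
        "contents": { "source": "data:,Provisioned%20by%20Ignition" }
      }
    ]
  }
}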
Operating system updates are delivered as a bootable container image, using OSTree as a backend, that is deployed across the cluster by the Machine Config Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree . Together, these technologies enable OpenShift Container Platform to manage the operating system like it manages any other application on the cluster, by in-place upgrades that keep the entire platform up to date. These in-place updates can reduce the burden on operations teams. If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. 1.1.3. Glossary of common terms for OpenShift Container Platform installing The glossary defines common terms that relate to the installation content. Read the following list of terms to better understand the installation process. Assisted Installer An installer hosted at console.redhat.com that provides a web-based user interface or a RESTful API for creating a cluster configuration. The Assisted Installer generates a discovery image. Cluster machines boot with the discovery image, which installs RHCOS and an agent. Together, the Assisted Installer and agent provide preinstallation validation and installation for the cluster. Agent-based Installer An installer similar to the Assisted Installer, but you must download the Agent-based Installer first. The Agent-based Installer is ideal for disconnected environments. Bootstrap node A temporary machine that runs a minimal Kubernetes configuration required to deploy the OpenShift Container Platform control plane. Control plane A container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers. Also known as control plane machines. Compute node Nodes that are responsible for executing workloads for cluster users. Also known as worker nodes. Disconnected installation In some situations, parts of a data center might not have access to the internet, even through proxy servers. You can still install the OpenShift Container Platform in these environments, but you must download the required software and images and make them available to the disconnected environment. The OpenShift Container Platform installation program A program that provisions the infrastructure and deploys a cluster. Installer-provisioned infrastructure The installation program deploys and configures the infrastructure that the cluster runs on. Ignition config files A file that the Ignition tool uses to configure Red Hat Enterprise Linux CoreOS (RHCOS) during operating system initialization. The installation program generates different Ignition configuration files to initialize bootstrap, control plane, and worker nodes. Kubernetes manifests Specifications of a Kubernetes API object in a JSON or YAML format. A configuration file can include deployments, config maps, secrets, daemonsets, and so on. Kubelet A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod. Load balancers A load balancer serves as the single point of contact for clients. 
Load balancers for the API distribute incoming traffic across control plane nodes. Machine Config Operator An Operator that manages and applies configurations and updates of the base operating system and container runtime, including everything between the kernel and kubelet, for the nodes in the cluster. Operators The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An operator takes human operational knowledge and encodes it into software that is easily packaged and shared with customers. User-provisioned infrastructure You can install OpenShift Container Platform on infrastructure that you provide. You can use the installation program to generate the assets required to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. 1.1.4. Installation process Except for the Assisted Installer, when you install an OpenShift Container Platform cluster, you must download the installation program from the appropriate Cluster Type page on the OpenShift Cluster Manager Hybrid Cloud Console. This console manages: REST API for accounts. Registry tokens, which are the pull secrets that you use to obtain the required components. Cluster registration, which associates the cluster identity to your Red Hat account to facilitate the gathering of usage metrics. In OpenShift Container Platform 4.16, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type. Consider the following installation use cases: To deploy a cluster with the Assisted Installer, you must configure the cluster settings by using the Assisted Installer . There is no installation program to download and configure. After you finish setting the cluster configuration, you download a discovery ISO and then boot cluster machines with that image. You can install clusters with the Assisted Installer on Nutanix, vSphere, and bare metal with full integration, and other platforms without integration. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. To deploy clusters with the Agent-based Installer, you can download the Agent-based Installer first. You can then configure the cluster and generate a discovery image. You boot cluster machines with the discovery image, which installs an agent that communicates with the installation program and handles the provisioning for you instead of you interacting with the installation program or setting up a provisioner machine yourself. You must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. This approach is ideal for disconnected environments. For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster, except if you install on bare metal. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. 
If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. For the installation program, the program uses three sets of files during installation: an installation configuration file that is named install-config.yaml , Kubernetes manifests, and Ignition config files for your machine types. Important You can modify Kubernetes and the Ignition config files that control the underlying RHCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. Because of this risk, modifying Kubernetes and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support. The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster. The installation configuration files are all pruned when you run the installation program, so be sure to back up all the configuration files that you want to use again. Important You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation. The installation process with the Assisted Installer Installation with the Assisted Installer involves creating a cluster configuration interactively by using the web-based user interface or the RESTful API. The Assisted Installer user interface prompts you for required values and provides reasonable default values for the remaining parameters, unless you change them in the user interface or with the API. The Assisted Installer generates a discovery image, which you download and use to boot the cluster machines. The image installs RHCOS and an agent, and the agent handles the provisioning for you. You can install OpenShift Container Platform with the Assisted Installer and full integration on Nutanix, vSphere, and bare metal. Additionally, you can install OpenShift Container Platform with the Assisted Installer on other platforms without integration. OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. If possible, use the Assisted Installer feature to avoid having to download and configure the Agent-based Installer. The installation process with Agent-based infrastructure Agent-based installation is similar to using the Assisted Installer, except that you must initially download and install the Agent-based Installer . An Agent-based installation is useful when you want the convenience of the Assisted Installer, but you need to install a cluster in a disconnected environment. If possible, use the Agent-based installation feature to avoid having to create a provisioner machine with a bootstrap VM, and then provision and maintain the cluster infrastructure. The installation process with installer-provisioned infrastructure The default installation type uses installer-provisioned infrastructure. 
By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster. You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. The installation process with user-provisioned infrastructure You can also install OpenShift Container Platform on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself. The following list details some of these self-managed resources: The underlying infrastructure for the control plane and compute machines that make up the cluster Load balancers Cluster networking, including the DNS records and required subnets Storage for the cluster infrastructure and applications If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL compute machines to your cluster. Installation process details When a cluster is provisioned, each machine in the cluster requires information about the cluster. OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. The temporary bootstrap machine boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process: Figure 1.2. Creating the bootstrap, control plane, and compute machines After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Consider using Ignition config files within 12 hours after they are generated, because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Bootstrapping a cluster involves the following steps: The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. If you provision the infrastructure, this step requires manual intervention. The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. If you provision the infrastructure, this step requires manual intervention. The temporary control plane schedules the production control plane to the production control plane machines. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine injects OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. If you provision the infrastructure, this step requires manual intervention. The control plane sets up the compute nodes. The control plane installs additional services in the form of a set of Operators. The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operations, including the creation of compute machines in supported environments. Additional resources Red Hat OpenShift Network Calculator 1.1.5. Verifying node state after installation The OpenShift Container Platform installation completes when the following installation health checks are successful: The provisioner can access the OpenShift Container Platform web console. All control plane nodes are ready. All cluster Operators are available. Note After the installation completes, the specific cluster Operators responsible for the worker nodes continuously attempt to provision all worker nodes. Some time is required before all worker nodes report as READY . For installations on bare metal, wait a minimum of 60 minutes before troubleshooting a worker node. For installations on all other platforms, wait a minimum of 40 minutes before troubleshooting a worker node. A DEGRADED state for the cluster Operators responsible for the worker nodes depends on the Operators' own resources and not on the state of the nodes. After your installation completes, you can continue to monitor the condition of the nodes in your cluster. Prerequisites The installation program resolves successfully in the terminal. 
Procedure Show the status of all worker nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a Show the phase of all worker machine nodes: USD oc get machines -A Example output NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m Additional resources Getting the BareMetalHost resource Following the progress of the installation Validating an installation Agent-based Installer Assisted Installer for OpenShift Container Platform Installation scope The scope of the OpenShift Container Platform installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes. Additional resources See Available cluster customizations for details about OpenShift Container Platform configuration resources. 1.1.6. OpenShift Local overview OpenShift Local supports rapid application development to get started building OpenShift Container Platform clusters. OpenShift Local is designed to run on a local computer to simplify setup and testing, and to emulate the cloud development environment locally with all of the tools needed to develop container-based applications. Regardless of the programming language you use, OpenShift Local hosts your application and brings a minimal, preconfigured Red Hat OpenShift Container Platform cluster to your local PC without the need for a server-based infrastructure. On a hosted environment, OpenShift Local can create microservices, convert them into images, and run them in Kubernetes-hosted containers directly on your laptop or desktop running Linux, macOS, or Windows 10 or later. For more information about OpenShift Local, see Red Hat OpenShift Local Overview . 1.2. Supported platforms for OpenShift Container Platform clusters In OpenShift Container Platform 4.16, you can install a cluster that uses installer-provisioned infrastructure on the following platforms: Amazon Web Services (AWS) Bare metal Google Cloud Platform (GCP) IBM Cloud(R) Microsoft Azure Microsoft Azure Stack Hub Nutanix Red Hat OpenStack Platform (RHOSP) The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware vSphere For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat. Important After installation, the following changes are not supported: Mixing cloud provider platforms. Mixing cloud provider components. 
For example, using a persistent storage framework from another platform on the platform where you installed the cluster. In OpenShift Container Platform 4.16, you can install a cluster that uses user-provisioned infrastructure on the following platforms: AWS Azure Azure Stack Hub Bare metal GCP IBM Power(R) IBM Z(R) or IBM(R) LinuxONE RHOSP The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware Cloud on AWS VMware vSphere Depending on the supported cases for the platform, you can perform installations on user-provisioned infrastructure, so that you can run machines with full internet access, place your cluster behind a proxy, or perform a disconnected installation. In a disconnected installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a disconnected installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access. The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms. Additional resources See Supported installation methods for different platforms for more information about the types of installations that are available for each supported platform. See Selecting a cluster installation method and preparing it for users for information about choosing an installation method and preparing the required resources. Red Hat OpenShift Network Calculator can help you design your cluster network during both the deployment and expansion phases. It addresses common questions related to the cluster network and provides output in a convenient JSON format. | [
"oc get nodes",
"NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a",
"oc get machines -A",
"NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installation_overview/ocp-installation-overview |
9.11. Initializing the Hard Disk | 9.11. Initializing the Hard Disk If no readable partition tables are found on existing hard disks, the installation program asks to initialize the hard disk. This operation makes any existing data on the hard disk unreadable. If your system has a brand new hard disk with no operating system installed, or you have removed all partitions on the hard disk, click Re-initialize drive . The installation program presents you with a separate dialog for each disk on which it cannot read a valid partition table. Click the Ignore all button or Re-initialize all button to apply the same answer to all devices. Figure 9.34. Warning screen - initializing hard drive Certain RAID systems or other nonstandard configurations may be unreadable to the installation program and the prompt to initialize the hard disk may appear. The installation program responds to the physical disk structures it is able to detect. To enable automatic initializing of hard disks for which it turns out to be necessary, use the kickstart command zerombr (refer to Chapter 32, Kickstart Installations ). This command is required when performing an unattended installation on a system with previously initialized disks. Warning If you have a nonstandard disk configuration that can be detached during installation and detected and configured afterward, power off the system, detach it, and restart the installation. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-initialize-hdd-x86 |
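The zerombr directive mentioned in the section above takes no options on Red Hat Enterprise Linux 6. The fragment below is a minimal illustration only; the clearpart line is an assumed companion directive for a typical unattended installation, not something zerombr requires.
# Initialize disks whose partition tables are unreadable, without prompting
zerombr
# Clear existing partitions and create a new disk label (illustrative; adjust to your layout)
clearpart --all --initlabel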
Chapter 4. Remote health monitoring with connected clusters | Chapter 4. Remote health monitoring with connected clusters 4.1. About remote health monitoring OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. The data that is provided to Red Hat enables the benefits outlined in this document. A cluster that reports data to Red Hat through Telemetry and the Insights Operator is considered a connected cluster . Telemetry is the term that Red Hat uses to describe the information being sent to Red Hat by the OpenShift Container Platform Telemeter Client. Lightweight attributes are sent from connected clusters to Red Hat to enable subscription management automation, monitor the health of clusters, assist with support, and improve customer experience. The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce insights about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators on OpenShift Cluster Manager . More information is provided in this document about these two processes. Telemetry and Insights Operator benefits Telemetry and the Insights Operator enable the following benefits for end-users: Enhanced identification and resolution of issues . Events that might seem normal to an end-user can be observed by Red Hat from a broader perspective across a fleet of clusters. Some issues can be more rapidly identified from this point of view and resolved without an end-user needing to open a support case or file a Jira issue . Advanced release management . OpenShift Container Platform offers the candidate , fast , and stable release channels, which enable you to choose an update strategy. The graduation of a release from fast to stable is dependent on the success rate of updates and on the events seen during upgrades. With the information provided by connected clusters, Red Hat can improve the quality of releases to stable channels and react more rapidly to issues found in the fast channels. Targeted prioritization of new features and functionality . The data collected provides insights about which areas of OpenShift Container Platform are used most. With this information, Red Hat can focus on developing the new features and functionality that have the greatest impact for our customers. A streamlined support experience . You can provide a cluster ID for a connected cluster when creating a support ticket on the Red Hat Customer Portal . This enables Red Hat to deliver a streamlined support experience that is specific to your cluster, by using the connected information. This document provides more information about that enhanced support experience. Predictive analytics . The insights displayed for your cluster on OpenShift Cluster Manager are enabled by the information collected from connected clusters. Red Hat is investing in applying deep learning, machine learning, and artificial intelligence automation to help identify issues that OpenShift Container Platform clusters are exposed to. 4.1.1. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document. 
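To confirm that the Telemeter Client is actually running on a connected cluster, you can look for its pod in the monitoring namespace. This is an informal check rather than a documented procedure, and the component name used in the filter is an assumption:
# The Telemeter Client runs alongside the rest of the monitoring stack
oc get pods -n openshift-monitoring | grep telemeter-client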
This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Container Platform upgrades to customers to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All connected cluster information is used by Red Hat to help make OpenShift Container Platform better and more intuitive to use. Additional resources See the OpenShift Container Platform update documentation for more information about updating or upgrading a cluster. 4.1.1.1. Information collected by Telemetry The following information is collected by Telemetry: 4.1.1.1.1. System information Version information, including the OpenShift Container Platform cluster version and installed update details that are used to determine update version availability Update information, including the number of updates available per cluster, the channel and image repository used for an update, update progress information, and the number of errors that occur in an update The unique random identifier that is generated during an installation Configuration details that help Red Hat Support to provide beneficial support for customers, including node configuration at the cloud infrastructure level, hostnames, IP addresses, Kubernetes pod names, namespaces, and services The OpenShift Container Platform framework components installed in a cluster and their condition and status Events for all namespaces listed as "related objects" for a degraded Operator Information about degraded software Information about the validity of certificates The name of the provider platform that OpenShift Container Platform is deployed on and the data center location 4.1.1.1.2. Sizing Information Sizing information about clusters, machine types, and machines, including the number of CPU cores and the amount of RAM used for each The number of etcd members and the number of objects stored in the etcd cluster Number of application builds by build strategy type 4.1.1.1.3. Usage information Usage information about components, features, and extensions Usage details about Technology Previews and unsupported configurations Telemetry does not collect identifying information such as usernames or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the Red Hat Privacy Statement for more information about Red Hat's privacy practices. Additional resources See Showing data collected by Telemetry for details about how to list the attributes that Telemetry gathers from Prometheus in OpenShift Container Platform. See the upstream cluster-monitoring-operator source code for a list of the attributes that Telemetry gathers from Prometheus. Telemetry is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . 4.1.2. About the Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. 
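Before looking at any reports, a quick way to confirm that the Insights Operator itself is healthy is to check its cluster Operator condition and its pods. This is an informal check, not taken from the original procedures:
# The Insights Operator reports its condition through the insights cluster Operator
oc get clusteroperator insights
# Its pods run in the openshift-insights namespace
oc get pods -n openshift-insights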
Users of OpenShift Container Platform can display the report of each cluster in the Insights Advisor service on Red Hat Hybrid Cloud Console. If any issues have been identified, Insights provides further details and, if available, steps on how to solve a problem. The Insights Operator does not collect identifying information, such as user names, passwords, or certificates. See Red Hat Insights Data & Application Security for information about Red Hat Insights data collection and controls. Red Hat uses all connected cluster information to: Identify potential cluster issues and provide a solution and preventive actions in the Insights Advisor service on Red Hat Hybrid Cloud Console Improve OpenShift Container Platform by providing aggregated and critical information to product and support teams Make OpenShift Container Platform more intuitive Additional resources The Insights Operator is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . 4.1.2.1. Information collected by the Insights Operator The following information is collected by the Insights Operator: General information about your cluster and its components to identify issues that are specific to your OpenShift Container Platform version and environment Configuration files, such as the image registry configuration, of your cluster to determine incorrect settings and issues that are specific to parameters you set Errors that occur in the cluster components Progress information of running updates, and the status of any component upgrades Details of the platform that OpenShift Container Platform is deployed on, such as Amazon Web Services, and the region that the cluster is located in Cluster workload information transformed into discreet Secure Hash Algorithm (SHA) values, which allows Red Hat to assess workloads for security and version vulnerabilities without disclosing sensitive details If an Operator reports an issue, information is collected about core OpenShift Container Platform pods in the openshift-* and kube-* projects. This includes state, resource, security context, volume information, and more. Additional resources See Showing data collected by the Insights Operator for details about how to review the data that is collected by the Insights Operator. The Insights Operator source code is available for review and contribution. See the Insights Operator upstream project for a list of the items collected by the Insights Operator. 4.1.3. Understanding Telemetry and Insights Operator data flow The Telemeter Client collects selected time series data from the Prometheus API. The time series data is uploaded to api.openshift.com every four minutes and thirty seconds for processing. The Insights Operator gathers selected data from the Kubernetes API and the Prometheus API into an archive. The archive is uploaded to OpenShift Cluster Manager every two hours for processing. The Insights Operator also downloads the latest Insights analysis from OpenShift Cluster Manager . This is used to populate the Insights status pop-up that is included in the Overview page in the OpenShift Container Platform web console. All of the communication with Red Hat occurs over encrypted channels by using Transport Layer Security (TLS) and mutual certificate authentication. All of the data is encrypted in transit and at rest. Access to the systems that handle customer data is controlled through multi-factor authentication and strict authorization controls. 
Access is granted on a need-to-know basis and is limited to required operations. Telemetry and Insights Operator data flow Additional resources See Monitoring overview for more information about the OpenShift Container Platform monitoring stack. See Configuring your firewall for details about configuring a firewall and enabling endpoints for Telemetry and Insights 4.1.4. Additional details about how remote health monitoring data is used The information collected to enable remote health monitoring is detailed in Information collected by Telemetry and Information collected by the Insights Operator . As further described in the preceding sections of this document, Red Hat collects data about your use of the Red Hat Product(s) for purposes such as providing support and upgrades, optimizing performance or configuration, minimizing service impacts, identifying and remediating threats, troubleshooting, improving the offerings and user experience, responding to issues, and for billing purposes if applicable. Collection safeguards Red Hat employs technical and organizational measures designed to protect the telemetry and configuration data. Sharing Red Hat may share the data collected through Telemetry and the Insights Operator internally within Red Hat to improve your user experience. Red Hat may share telemetry and configuration data with its business partners in an aggregated form that does not identify customers to help the partners better understand their markets and their customers' use of Red Hat offerings or to ensure the successful integration of products jointly supported by those partners. Third parties Red Hat may engage certain third parties to assist in the collection, analysis, and storage of the Telemetry and configuration data. User control / enabling and disabling telemetry and configuration data collection You may disable OpenShift Container Platform Telemetry and the Insights Operator by following the instructions in Opting out of remote health reporting . 4.2. Showing data collected by remote health monitoring As an administrator, you can review the metrics collected by Telemetry and the Insights Operator. 4.2.1. Showing data collected by Telemetry You can see the cluster and components time series data captured by Telemetry. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has either the cluster-admin role or the cluster-monitoring-view role. Procedure Find the URL for the Prometheus service that runs in the OpenShift Container Platform cluster: USD oc get route prometheus-k8s -n openshift-monitoring -o jsonpath="{.spec.host}" Navigate to the URL. Enter this query in the Expression input box and press Execute : This query replicates the request that Telemetry makes against a running OpenShift Container Platform cluster's Prometheus service and returns the full set of time series captured by Telemetry. 4.2.2. Showing data collected by the Insights Operator You can review the data that is collected by the Insights Operator. Prerequisites Access to the cluster as a user with the cluster-admin role. 
Procedure Find the name of the currently running pod for the Insights Operator: USD INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running) Copy the recent data archives collected by the Insights Operator: USD oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data The recent Insights Operator archives are now available in the insights-data directory. 4.3. Opting out of remote health reporting You may choose to opt out of reporting health and usage data for your cluster. To opt out of remote health reporting, you must: Modify the global cluster pull secret to disable remote health reporting. Update the cluster to use this modified pull secret. 4.3.1. Consequences of disabling remote health reporting In OpenShift Container Platform, customers can opt out of reporting usage information. However, connected clusters allow Red Hat to react more quickly to problems and better support our customers, as well as better understand how product upgrades impact clusters. Connected clusters also help to simplify the subscription and entitlement process and enable the OpenShift Cluster Manager service to provide an overview of your clusters and their subscription status. Red Hat strongly recommends leaving health and usage reporting enabled for pre-production and test clusters even if it is necessary to opt out for production clusters. This allows Red Hat to be a participant in qualifying OpenShift Container Platform in your environments and react more rapidly to product issues. Some of the consequences of opting out of having a connected cluster are: Red Hat will not be able to monitor the success of product upgrades or the health of your clusters without a support case being opened. Red Hat will not be able to use configuration data to better triage customer support cases and identify which configurations our customers find important. The OpenShift Cluster Manager will not show data about your clusters including health and usage information. Your subscription entitlement information must be manually entered via console.redhat.com without the benefit of automatic usage reporting. In restricted networks, Telemetry and Insights data can still be reported through appropriate configuration of your proxy. 4.3.2. Modifying the global cluster pull secret to disable remote health reporting You can modify your existing global cluster pull secret to disable remote health reporting. This disables both Telemetry and the Insights Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Download the global cluster pull secret to your local file system. USD oc extract secret/pull-secret -n openshift-config --to=. In a text editor, edit the .dockerconfigjson file that was downloaded. Remove the cloud.openshift.com JSON entry, for example: "cloud.openshift.com":{"auth":"<hash>","email":"<email_address>"} Save the file. You can now update your cluster to use this modified pull secret. 4.3.3. Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. The procedure is required when users use a separate registry to store images than the registry used during installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. 
Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: USD oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> 1 1 Provide the path to the pull secret file. Enter the following command to add the new pull secret: USD oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. Enter the following command to update the global pull secret for your cluster: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot. 4.4. Using Insights to identify issues with your cluster Insights repeatedly analyzes the data Insights Operator sends. Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. 4.4.1. About Red Hat Insights Advisor for OpenShift Container Platform You can use Insights Advisor to assess and monitor the health of your OpenShift Container Platform clusters. Whether you are concerned about individual clusters, or with your whole infrastructure, it is important to be aware of your exposure to issues that can affect service availability, fault tolerance, performance, or security. Insights repeatedly analyzes the data that Insights Operator sends using a database of recommendations , which are sets of conditions that can leave your OpenShift Container Platform clusters at risk. Your data is then uploaded to the Insights Advisor service on Red Hat Hybrid Cloud Console where you can perform the following actions: See clusters impacted by a specific recommendation. Use robust filtering capabilities to refine your results to those recommendations. Learn more about individual recommendations, details about the risks they present, and get resolutions tailored to your individual clusters. Share results with other stakeholders. 4.4.2. Understanding Insights Advisor recommendations Insights Advisor bundles information about various cluster states and component configurations that can negatively affect the service availability, fault tolerance, performance, or security of your clusters. 
This information set is called a recommendation in Insights Advisor and includes the following information: Name: A concise description of the recommendation Added: When the recommendation was published to the Insights Advisor archive Category: Whether the issue has the potential to negatively affect service availability, fault tolerance, performance, or security Total risk: A value derived from the likelihood that the condition will negatively affect your infrastructure, and the impact on operations if that were to happen Clusters: A list of clusters on which a recommendation is detected Description: A brief synopsis of the issue, including how it affects your clusters Link to associated topics: More information from Red Hat about the issue 4.4.3. Displaying potential issues with your cluster This section describes how to display the Insights report in Insights Advisor on OpenShift Cluster Manager . Note that Insights repeatedly analyzes your cluster and shows the latest results. These results can change, for example, if you fix an issue or a new issue has been detected. Prerequisites Your cluster is registered on OpenShift Cluster Manager . Remote health reporting is enabled, which is the default. You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Depending on the result, Insights Advisor displays one of the following: No matching recommendations found , if Insights did not identify any issues. A list of issues Insights has detected, grouped by risk (low, moderate, important, and critical). No clusters yet , if Insights has not yet analyzed the cluster. The analysis starts shortly after the cluster has been installed, registered, and connected to the internet. If any issues are displayed, click the > icon in front of the entry for more details. Depending on the issue, the details can also contain a link to more information from Red Hat about the issue. 4.4.4. Displaying all Insights Advisor recommendations The Recommendations view, by default, only displays the recommendations that are detected on your clusters. However, you can view all of the recommendations in the advisor archive. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on Red Hat Hybrid Cloud Console. You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Click the X icons to the Clusters Impacted and Status filters. You can now browse through all of the potential recommendations for your cluster. 4.4.5. Disabling Insights Advisor recommendations You can disable specific recommendations that affect your clusters, so that they no longer appear in your reports. It is possible to disable a recommendation for a single cluster or all of your clusters. Note Disabling a recommendation for all of your clusters also applies to any future clusters. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on OpenShift Cluster Manager . You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Click the name of the recommendation to disable. You are directed to the single recommendation page. To disable the recommendation for a single cluster: Click the Options menu for that cluster, and then click Disable recommendation for cluster . Enter a justification note and click Save . 
To disable the recommendation for all of your clusters: Click Actions Disable recommendation . Enter a justification note and click Save . 4.4.6. Enabling a previously disabled Insights Advisor recommendation When a recommendation is disabled for all clusters, you will no longer see the recommendation in Insights Advisor. You can change this behavior. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on OpenShift Cluster Manager . You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Filter the recommendations by Status Disabled . Locate the recommendation to enable. Click the Options menu , and then click Enable recommendation . 4.4.7. Displaying the Insights status in the web console Insights repeatedly analyzes your cluster and you can display the status of identified potential issues of your cluster in the OpenShift Container Platform web console. This status shows the number of issues in the different categories and, for further details, links to the reports in OpenShift Cluster Manager . Prerequisites Your cluster is registered in OpenShift Cluster Manager . Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console. Procedure Navigate to Home Overview in the OpenShift Container Platform web console. Click Insights on the Status card. The pop-up window lists potential issues grouped by risk. Click the individual categories or View all recommendations in Insights Advisor to display more details. 4.5. Using Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. Additional resources The Insights Operator is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . For more information on using Insights Advisor to identify issues with your cluster, see Using Insights to identify issues with your cluster . 4.5.1. Downloading your Insights Operator archive Insights Operator stores gathered data in an archive located in the openshift-insights namespace of your cluster. You can download and review the data that is gathered by the Insights Operator. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Find the name of the running pod for the Insights Operator: USD oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running Copy the recent data archives collected by the Insights Operator: USD oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1 1 Replace <insights_operator_pod_name> with the pod name output from the preceding command. The recent Insights Operator archives are now available in the insights-data directory. 4.5.2. Viewing Insights Operator gather durations You can view the time it takes for the Insights Operator to gather the information contained in the archive. This helps you to understand Insights Operator resource usage and issues with Insights Advisor. Prerequisites A recent copy of your Insights Operator archive. 
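The copied data is a set of compressed archives, so before you can open files such as gathers.json in the next procedure you need to unpack one of them. A minimal sketch; the archive file name shown here is an example, not a value from the documentation:
# List the archives copied from the Insights Operator pod
ls insights-data/
# Unpack the most recent archive into a working directory (file name is illustrative)
mkdir -p insights-extracted
tar xzf insights-data/insights-2023-01-01-000000.tar.gz -C insights-extracted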
Procedure From your archive, open /insights-operator/gathers.json . The file contains a list of Insights Operator gather operations: { "name": "clusterconfig/authentication", "duration_in_ms": 730, 1 "records_count": 1, "errors": null, "panic": null } 1 duration_in_ms is the amount of time in milliseconds for each gather operation. Inspect each gather operation for abnormalities. 4.6. Using remote health reporting in a restricted network You can manually gather and upload Insights Operator archives to diagnose issues from a restricted network. To use the Insights Operator in a restricted network, you must: Create a copy of your Insights Operator archive. Upload the Insights Operator archive to console.redhat.com . Additionally, you can choose to obfuscate the Insights Operator data before upload. 4.6.1. Running an Insights Operator gather operation You must run a gather operation to create an Insights Operator archive. Prerequisites You are logged in to OpenShift Container Platform as cluster-admin . Procedure Create a file named gather-job.yaml using this template: apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}] Copy your insights-operator image version: USD oc get -n openshift-insights deployment insights-operator -o yaml Paste your image version in gather-job.yaml : initContainers: - name: insights-operator image: <your_insights_operator_image_version> terminationMessagePolicy: FallbackToLogsOnError volumeMounts: Create the gather job: USD oc apply -n openshift-insights -f gather-job.yaml Find the name of the job pod: USD oc describe -n openshift-insights job/insights-operator-job Example output Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job- <your_job> where insights-operator-job- <your_job> is the name of the pod. 
Verify that the operation has finished: USD oc logs -n openshift-insights insights-operator-job- <your_job> insights-operator Example output I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms Save the created archive: USD oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data Clean up the job: USD oc delete -n openshift-insights job insights-operator-job 4.6.2. Uploading an Insights Operator archive You can manually upload an Insights Operator archive to console.redhat.com to diagnose potential issues. Prerequisites You are logged in to OpenShift Container Platform as cluster-admin . You have a workstation with unrestricted internet access. You have created a copy of the Insights Operator archive. Procedure Download the dockerconfig.json file: USD oc extract secret/pull-secret -n openshift-config --to=. Copy your "cloud.openshift.com" "auth" token from the dockerconfig.json file: { "auths": { "cloud.openshift.com": { "auth": " <your_token> ", "email": "[email protected]" } } Upload the archive to console.redhat.com : USD curl -v -H "User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> " -H "Authorization: Bearer <your_token> " -F "upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar" https://console.redhat.com/api/ingress/v1/upload where <cluster_id> is your cluster ID, <your_token> is the token from your pull secret, and <path_to_archive> is the path to the Insights Operator archive. If the operation is successful, the command returns a "request_id" and "account_number" : Example output * Connection #0 to host console.redhat.com left intact {"request_id":"393a7cf1093e434ea8dd4ab3eb28884c","upload":{"account_number":"6274079"}}% Verification steps Log in to https://console.redhat.com/openshift . Click the Clusters menu in the left pane. To display the details of the cluster, click the cluster name. Open the Insights Advisor tab of the cluster. If the upload was successful, the tab displays one of the following: Your cluster passed all recommendations , if Insights Advisor did not identify any issues. A list of issues that Insights Advisor has detected, prioritized by risk (low, moderate, important, and critical). 4.6.3. Enabling Insights Operator data obfuscation You can enable obfuscation to mask sensitive and identifiable IPv4 addresses and cluster base domains that the Insights Operator sends to console.redhat.com . Warning Although this feature is available, Red Hat recommends keeping obfuscation disabled for a more effective support experience. Obfuscation assigns non-identifying values to cluster IPv4 addresses, and uses a translation table that is retained in memory to change IP addresses to their obfuscated versions throughout the Insights Operator archive before uploading the data to console.redhat.com . For cluster base domains, obfuscation changes the base domain to a hardcoded substring. For example, cluster-api.openshift.example.com becomes cluster-api.<CLUSTER_BASE_DOMAIN> . The following procedure enables obfuscation using the support secret in the openshift-config namespace. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . Procedure Navigate to Workloads Secrets . Select the openshift-config project. Search for the support secret using the Search by name field. If it does not exist, click Create Key/value secret to create it. Click the Options menu , and then click Edit Secret . 
Click Add Key/Value . Create a key named enableGlobalObfuscation with a value of true , and click Save . Navigate to Workloads Pods Select the openshift-insights project. Find the insights-operator pod. To restart the insights-operator pod, click the Options menu , and then click Delete Pod . Verification Navigate to Workloads Secrets . Select the openshift-insights project. Search for the obfuscation-translation-table secret using the Search by name field. If the obfuscation-translation-table secret exists, then obfuscation is enabled and working. Alternatively, you can inspect /insights-operator/gathers.json in your Insights Operator archive for the value "is_global_obfuscation_enabled": true . Additional resources For more information on how to download your Insights Operator archive, see Showing data collected by the Insights Operator . 4.7. Importing simple content access certificates with Insights Operator Insights Operator can import your RHEL Simple Content Access (SCA) certificates from on Red Hat Hybrid Cloud Console . SCA is a capability in Red Hat's subscription tools which simplifies the behavior of the entitlement tooling. It is easier to consume the content provided by your Red Hat subscriptions without the complexity of configuring subscription tooling. After importing the certificates, they are stored in the etc-pki-entitlement secret in the openshift-config-managed namespace. Insights Operator imports SCA certificates every 8 hours by default, but can be configured or disabled using the support secret in the openshift-config namespace. In OpenShift Container Platform 4.9, this feature is in Technology Preview and must be enabled using the TechPreviewNoUpgrade Feature Set. See Enabling OpenShift Container Platform features using FeatureGates for more information. For more information about Simple Content Access certificates see the Simple Content Access article in the Red Hat Knowledgebase. Important InsightsOperatorPullingSCA is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 4.7.1. Configuring simple content access import interval You can configure how often the Insights Operator imports the RHEL Simple Content Access (SCA) certificates using the support secret in the openshift-config namespace. The certificate import normally occurs every 8 hours, but you may want to shorten this interval if you update your SCA configuration in Red Hat Subscription Management. This procedure describes how to update the import interval to one hour. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . Procedure Navigate to Workloads Secrets . Select the openshift-config project. Search for the support secret using the Search by name field. If it does not exist, click Create Key/value secret to create it. Click the Options menu , and then click Edit Secret . Click Add Key/Value . Create a key named ocmInterval with a value of 1h , and click Save . Note The interval 1h can also be entered as 60m for 60 minutes. Navigate to Workloads Pods Select the openshift-insights project. 
Find the insights-operator pod. To restart the insights-operator pod, click the Options menu , and then click Delete Pod . 4.7.2. Disabling simple content access import You can disable the import of RHEL Simple Content Access certificates using the support secret in the openshift-config namespace. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . Procedure Navigate to Workloads Secrets . Select the openshift-config project. Search for the support secret using the Search by name field. If it does not exist, click Create Key/value secret to create it. Click the Options menu , and then click Edit Secret . Click Add Key/Value . Create a key named ocmPullDisabled with a value of true , and click Save . Navigate to Workloads Pods Select the openshift-insights project. Find the insights-operator pod. To restart the insights-operator pod, click the Options menu , and then click Delete Pod . | [
"oc get route prometheus-k8s -n openshift-monitoring -o jsonpath=\"{.spec.host}\"",
"{__name__=~\"cluster:usage:.*|count:up0|count:up1|cluster_version|cluster_version_available_updates|cluster_operator_up|cluster_operator_conditions|cluster_version_payload|cluster_installer|cluster_infrastructure_provider|cluster_feature_set|instance:etcd_object_counts:sum|ALERTS|code:apiserver_request_total:rate:sum|cluster:capacity_cpu_cores:sum|cluster:capacity_memory_bytes:sum|cluster:cpu_usage_cores:sum|cluster:memory_usage_bytes:sum|openshift:cpu_usage_cores:sum|openshift:memory_usage_bytes:sum|workload:cpu_usage_cores:sum|workload:memory_usage_bytes:sum|cluster:virt_platform_nodes:sum|cluster:node_instance_type_count:sum|cnv:vmi_status_running:count|node_role_os_version_machine:cpu_capacity_cores:sum|node_role_os_version_machine:cpu_capacity_sockets:sum|subscription_sync_total|csv_succeeded|csv_abnormal|ceph_cluster_total_bytes|ceph_cluster_total_used_raw_bytes|ceph_health_status|job:ceph_osd_metadata:count|job:kube_pv:count|job:ceph_pools_iops:total|job:ceph_pools_iops_bytes:total|job:ceph_versions_running:count|job:noobaa_total_unhealthy_buckets:sum|job:noobaa_bucket_count:sum|job:noobaa_total_object_count:sum|noobaa_accounts_num|noobaa_total_usage|console_url|cluster:network_attachment_definition_instances:max|cluster:network_attachment_definition_enabled_instance_up:max|insightsclient_request_send_total|cam_app_workload_migrations|cluster:apiserver_current_inflight_requests:sum:max_over_time:2m|cluster:telemetry_selected_series:count\",alertstate=~\"firing|\"}",
"INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)",
"oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data",
"oc extract secret/pull-secret -n openshift-config --to=.",
"\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"<email_address>\"}",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running",
"oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1",
"{ \"name\": \"clusterconfig/authentication\", \"duration_in_ms\": 730, 1 \"records_count\": 1, \"errors\": null, \"panic\": null }",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}]",
"oc get -n openshift-insights deployment insights-operator -o yaml",
"initContainers: - name: insights-operator image: <your_insights_operator_image_version> terminationMessagePolicy: FallbackToLogsOnError volumeMounts:",
"oc apply -n openshift-insights -f gather-job.yaml",
"oc describe -n openshift-insights job/insights-operator-job",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job- <your_job>",
"oc logs -n openshift-insights insights-operator-job- <your_job> insights-operator",
"I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms",
"oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data",
"oc delete -n openshift-insights job insights-operator-job",
"oc extract secret/pull-secret -n openshift-config --to=.",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \"[email protected]\" } }",
"curl -v -H \"User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> \" -H \"Authorization: Bearer <your_token> \" -F \"upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar\" https://console.redhat.com/api/ingress/v1/upload",
"* Connection #0 to host console.redhat.com left intact {\"request_id\":\"393a7cf1093e434ea8dd4ab3eb28884c\",\"upload\":{\"account_number\":\"6274079\"}}%"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/support/remote-health-monitoring-with-connected-clusters |
Chapter 1. About instances | Chapter 1. About instances Instances are the individual virtual machines that run on physical Compute nodes inside the cloud. To launch an instance, you need a flavor and either an image or a bootable volume. When you use an image to launch an instance, the provided image becomes the base image that contains a virtual disk installed with a bootable operating system. Each instance requires a root disk, which we refer to as the instance disk. The Compute service (nova) resizes the instance disk to match the specifications of the flavor that you specified for the instance. Images are managed by the Image Service (glance). The Image Service image store contains a number of predefined images. The Compute nodes provide the available vCPU, memory, and local disk resources for instances. The Block Storage service (cinder) provides predefined volumes. Instance disk data is stored either in ephemeral storage, which is deleted when you delete the instance, or in a persistent volume provided by the Block Storage service. The Compute service is the central component that provides instances on demand. The Compute service creates, schedules, and manages instances, and interacts with the Identity service for authentication, the Image service for the images used to launch instances, and the Dashboard service (horizon) for the user and administrative interface. As a cloud user, you interact with the Compute service when you create and manage your instances. You can create and manage your instances by using the OpenStack CLI or the Dashboard. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_instances/about_instances |
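As a simple illustration of launching and inspecting an instance from the OpenStack CLI, the commands below are a sketch only; the flavor, image, and network names are placeholders rather than values defined in this guide:
# Launch an instance from an existing image, flavor, and network in your project
openstack server create --flavor m1.small --image rhel-9 --network private my-instance
# Review the build status and details of the new instance
openstack server show my-instance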
Chapter 3. Authentication [config.openshift.io/v1] | Chapter 3. Authentication [config.openshift.io/v1] Description Authentication specifies cluster-wide settings for authentication (like OAuth and webhook token authenticators). The canonical name of an instance is cluster . Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 3.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description oauthMetadata object oauthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for an external OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 If oauthMetadata.name is non-empty, this value has precedence over any metadata reference stored in status. The key "oauthMetadata" is used to locate the data. If specified and the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config. serviceAccountIssuer string serviceAccountIssuer is the identifier of the bound service account token issuer. The default is https://kubernetes.default.svc WARNING: Updating this field will not result in immediate invalidation of all bound tokens with the issuer value. Instead, the tokens issued by service account issuer will continue to be trusted for a time period chosen by the platform (currently set to 24h). This time period is subject to change over time. This allows internal components to transition to use new service account issuer without service distruption. type string type identifies the cluster managed, user facing authentication mode in use. Specifically, it manages the component that responds to login attempts. The default is IntegratedOAuth. webhookTokenAuthenticator object webhookTokenAuthenticator configures a remote token reviewer. These remote authentication webhooks can be used to verify bearer tokens via the tokenreviews.authentication.k8s.io REST API. This is required to honor bearer tokens that are provisioned by an external authentication service. webhookTokenAuthenticators array webhookTokenAuthenticators is DEPRECATED, setting it has no effect. 
webhookTokenAuthenticators[] object deprecatedWebhookTokenAuthenticator holds the necessary configuration options for a remote token authenticator. It's the same as WebhookTokenAuthenticator but it's missing the 'required' validation on KubeConfig field. 3.1.2. .spec.oauthMetadata Description oauthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for an external OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 If oauthMetadata.name is non-empty, this value has precedence over any metadata reference stored in status. The key "oauthMetadata" is used to locate the data. If specified and the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 3.1.3. .spec.webhookTokenAuthenticator Description webhookTokenAuthenticator configures a remote token reviewer. These remote authentication webhooks can be used to verify bearer tokens via the tokenreviews.authentication.k8s.io REST API. This is required to honor bearer tokens that are provisioned by an external authentication service. Type object Required kubeConfig Property Type Description kubeConfig object kubeConfig references a secret that contains kube config file data which describes how to access the remote webhook service. The namespace for the referenced secret is openshift-config. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. 3.1.4. .spec.webhookTokenAuthenticator.kubeConfig Description kubeConfig references a secret that contains kube config file data which describes how to access the remote webhook service. The namespace for the referenced secret is openshift-config. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 3.1.5. .spec.webhookTokenAuthenticators Description webhookTokenAuthenticators is DEPRECATED, setting it has no effect. Type array 3.1.6. .spec.webhookTokenAuthenticators[] Description deprecatedWebhookTokenAuthenticator holds the necessary configuration options for a remote token authenticator. It's the same as WebhookTokenAuthenticator but it's missing the 'required' validation on KubeConfig field. Type object Property Type Description kubeConfig object kubeConfig contains kube config file data which describes how to access the remote webhook service. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. 
If the specified kube config data is not valid, the webhook is not honored. The namespace for this secret is determined by the point of use. 3.1.7. .spec.webhookTokenAuthenticators[].kubeConfig Description kubeConfig contains kube config file data which describes how to access the remote webhook service. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. The namespace for this secret is determined by the point of use. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 3.1.8. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description integratedOAuthMetadata object integratedOAuthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for the in-cluster integrated OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 This contains the observed value based on cluster state. An explicitly set value in spec.oauthMetadata has precedence over this field. This field has no meaning if authentication spec.type is not set to IntegratedOAuth. The key "oauthMetadata" is used to locate the data. If the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config-managed. 3.1.9. .status.integratedOAuthMetadata Description integratedOAuthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for the in-cluster integrated OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 This contains the observed value based on cluster state. An explicitly set value in spec.oauthMetadata has precedence over this field. This field has no meaning if authentication spec.type is not set to IntegratedOAuth. The key "oauthMetadata" is used to locate the data. If the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config-managed. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 3.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/authentications DELETE : delete collection of Authentication GET : list objects of kind Authentication POST : create an Authentication /apis/config.openshift.io/v1/authentications/{name} DELETE : delete an Authentication GET : read the specified Authentication PATCH : partially update the specified Authentication PUT : replace the specified Authentication /apis/config.openshift.io/v1/authentications/{name}/status GET : read status of the specified Authentication PATCH : partially update status of the specified Authentication PUT : replace status of the specified Authentication 3.2.1. 
/apis/config.openshift.io/v1/authentications Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Authentication Table 3.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Authentication Table 3.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.5. HTTP responses HTTP code Reponse body 200 - OK AuthenticationList schema 401 - Unauthorized Empty HTTP method POST Description create an Authentication Table 3.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.7. 
Body parameters Parameter Type Description body Authentication schema Table 3.8. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 202 - Accepted Authentication schema 401 - Unauthorized Empty 3.2.2. /apis/config.openshift.io/v1/authentications/{name} Table 3.9. Global path parameters Parameter Type Description name string name of the Authentication Table 3.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Authentication Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.12. Body parameters Parameter Type Description body DeleteOptions schema Table 3.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Authentication Table 3.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.15. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Authentication Table 3.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.17. Body parameters Parameter Type Description body Patch schema Table 3.18. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Authentication Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body Authentication schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty 3.2.3. /apis/config.openshift.io/v1/authentications/{name}/status Table 3.22. 
Global path parameters Parameter Type Description name string name of the Authentication Table 3.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Authentication Table 3.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.25. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Authentication Table 3.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.27. Body parameters Parameter Type Description body Patch schema Table 3.28. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Authentication Table 3.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.30. Body parameters Parameter Type Description body Authentication schema Table 3.31. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/config_apis/authentication-config-openshift-io-v1 |
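As a rough illustration of the endpoints above, the cluster-scoped Authentication resource can be read and patched with the oc client; the serviceAccountIssuer value shown is a placeholder assumption, not a recommendation from this reference.

# Read the specified Authentication resource (GET /apis/config.openshift.io/v1/authentications/{name})
oc get authentication.config.openshift.io cluster -o yaml

# Partially update the spec (PATCH), for example to point at a custom bound service account token issuer
oc patch authentication.config.openshift.io cluster --type=merge -p '{"spec":{"serviceAccountIssuer":"https://issuer.example.com"}}'

The status stanza holds observed values managed by the cluster and is not meant to be overridden through the spec endpoints.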
Curating collections using namespaces in Automation Hub | Curating collections using namespaces in Automation Hub Red Hat Ansible Automation Platform 2.3 Use namespaces to organize the collections created by automation developers in your organization. Create namespaces, upload collections and add additional information and resources that help your end users in their automation tasks. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/curating_collections_using_namespaces_in_automation_hub/index |
Chapter 5. Control plane architecture | Chapter 5. Control plane architecture The control plane , which is composed of control plane machines, manages the OpenShift Container Platform cluster. The control plane machines manage workloads on the compute machines, which are also known as worker machines. The cluster itself manages all upgrades to the machines by the actions of the Cluster Version Operator (CVO), the Machine Config Operator, and a set of individual Operators. 5.1. Node configuration management with machine config pools Machines that run control plane components or user workloads are divided into groups based on the types of resources they handle. These groups of machines are called machine config pools (MCP). Each MCP manages a set of nodes and its corresponding machine configs. The role of the node determines which MCP it belongs to; the MCP governs nodes based on its assigned node role label. Nodes in an MCP have the same configuration; this means nodes can be scaled up and torn down in response to increased or decreased workloads. By default, there are two MCPs created by the cluster when it is installed: master and worker . Each default MCP has a defined configuration applied by the Machine Config Operator (MCO), which is responsible for managing MCPs and facilitating MCP upgrades. You can create additional MCPs, or custom pools, to manage nodes that have custom use cases that extend outside of the default node types. Custom pools are pools that inherit their configurations from the worker pool. They use any machine config targeted for the worker pool, but add the ability to deploy changes only targeted at the custom pool. Since a custom pool inherits its configuration from the worker pool, any change to the worker pool is applied to the custom pool as well. Custom pools that do not inherit their configurations from the worker pool are not supported by the MCO. Note A node can only be included in one MCP. If a node has multiple labels that correspond to several MCPs, like worker,infra , it is managed by the infra custom pool, not the worker pool. Custom pools take priority on selecting nodes to manage based on node labels; nodes that do not belong to a custom pool are managed by the worker pool. It is recommended to have a custom pool for every node role you want to manage in your cluster. For example, if you create infra nodes to handle infra workloads, it is recommended to create a custom infra MCP to group those nodes together. If you apply an infra role label to a worker node so it has the worker,infra dual label, but do not have a custom infra MCP, the MCO considers it a worker node. If you remove the worker label from a node and apply the infra label without grouping it in a custom pool, the node is not recognized by the MCO and is unmanaged by the cluster. Important Any node labeled with the infra role that is only running infra workloads is not counted toward the total number of subscriptions. The MCP managing an infra node is mutually exclusive from how the cluster determines subscription charges; tagging a node with the appropriate infra role and using taints to prevent user workloads from being scheduled on that node are the only requirements for avoiding subscription charges for infra workloads. The MCO applies updates for pools independently; for example, if there is an update that affects all pools, nodes from each pool update in parallel with each other. 
If you add a custom pool, nodes from that pool also attempt to update concurrently with the master and worker nodes. 5.2. Machine roles in OpenShift Container Platform OpenShift Container Platform assigns hosts different roles. These roles define the function of the machine within the cluster. The cluster contains definitions for the standard master and worker role types. Note The cluster also contains the definition for the bootstrap role. Because the bootstrap machine is used only during cluster installation, its function is explained in the cluster installation documentation. 5.2.1. Control plane and node host compatibility The OpenShift Container Platform version must match between control plane host and node host. For example, in a 4.9 cluster, all control plane hosts must be 4.9 and all nodes must be 4.9. Temporary mismatches during cluster upgrades are acceptable. For example, when upgrading from OpenShift Container Platform 4.8 to 4.9, some nodes will upgrade to 4.9 before others. Prolonged skewing of control plane hosts and node hosts might expose older compute machines to bugs and missing features. Users should resolve skewed control plane hosts and node hosts as soon as possible. The kubelet service must not be newer than kube-apiserver , and can be up to two minor versions older depending on whether your OpenShift Container Platform version is odd or even. The table below shows the appropriate version compatibility: OpenShift Container Platform version Supported kubelet skew Odd OpenShift Container Platform minor versions [1] Up to one version older Even OpenShift Container Platform minor versions [2] Up to two versions older For example, OpenShift Container Platform 4.5, 4.7, 4.9. For example, OpenShift Container Platform 4.6, 4.8, 4.10. 5.2.2. Cluster workers In a Kubernetes cluster, the worker nodes are where the actual workloads requested by Kubernetes users run and are managed. The worker nodes advertise their capacity and the scheduler, which is part of the master services, determines on which nodes to start containers and pods. Important services run on each worker node, including CRI-O, which is the container engine, Kubelet, which is the service that accepts and fulfills requests for running and stopping container workloads, and a service proxy, which manages communication for pods across workers. In OpenShift Container Platform, machine sets control the worker machines. Machines with the worker role drive compute workloads that are governed by a specific machine pool that autoscales them. Because OpenShift Container Platform has the capacity to support multiple machine types, the worker machines are classed as compute machines. In this release, the terms worker machine and compute machine are used interchangeably because the only default type of compute machine is the worker machine. In future versions of OpenShift Container Platform, different types of compute machines, such as infrastructure machines, might be used by default. Note Machine sets are groupings of machine resources under the machine-api namespace. Machine sets are configurations that are designed to start new machines on a specific cloud provider. Conversely, machine config pools (MCPs) are part of the Machine Config Operator (MCO) namespace. An MCP is used to group machines together so the MCO can manage their configurations and facilitate their upgrades. 5.2.3. Cluster masters In a Kubernetes cluster, the control plane nodes run services that are required to control the Kubernetes cluster. 
In OpenShift Container Platform, the control plane machines are the control plane. They contain more than just the Kubernetes services for managing the OpenShift Container Platform cluster. Because all of the machines with the control plane role are control plane machines, the terms master and control plane are used interchangeably to describe them. Instead of being grouped into a machine set, control plane machines are defined by a series of standalone machine API resources. Extra controls apply to control plane machines to prevent you from deleting all control plane machines and breaking your cluster. Note Exactly three control plane nodes must be used for all production deployments. Services that fall under the Kubernetes category on the master include the Kubernetes API server, etcd, the Kubernetes controller manager, and the Kubernetes scheduler. Table 5.1. Kubernetes services that run on the control plane Component Description Kubernetes API server The Kubernetes API server validates and configures the data for pods, services, and replication controllers. It also provides a focal point for the shared state of the cluster. etcd etcd stores the persistent master state while other components watch etcd for changes to bring themselves into the specified state. Kubernetes controller manager The Kubernetes controller manager watches etcd for changes to objects such as replication, namespace, and service account controller objects, and then uses the API to enforce the specified state. Several such processes create a cluster with one active leader at a time. Kubernetes scheduler The Kubernetes scheduler watches for newly created pods without an assigned node and selects the best node to host the pod. There are also OpenShift services that run on the control plane, which include the OpenShift API server, OpenShift controller manager, OpenShift OAuth API server, and OpenShift OAuth server. Table 5.2. OpenShift services that run on the control plane Component Description OpenShift API server The OpenShift API server validates and configures the data for OpenShift resources, such as projects, routes, and templates. The OpenShift API server is managed by the OpenShift API Server Operator. OpenShift controller manager The OpenShift controller manager watches etcd for changes to OpenShift objects, such as project, route, and template controller objects, and then uses the API to enforce the specified state. The OpenShift controller manager is managed by the OpenShift Controller Manager Operator. OpenShift OAuth API server The OpenShift OAuth API server validates and configures the data to authenticate to OpenShift Container Platform, such as users, groups, and OAuth tokens. The OpenShift OAuth API server is managed by the Cluster Authentication Operator. OpenShift OAuth server Users request tokens from the OpenShift OAuth server to authenticate themselves to the API. The OpenShift OAuth server is managed by the Cluster Authentication Operator. Some of these services on the control plane machines run as systemd services, while others run as static pods. Systemd services are appropriate for services that you need to always come up on that particular system shortly after it starts. For control plane machines, those include sshd, which allows remote login. It also includes services such as: The CRI-O container engine (crio), which runs and manages the containers. OpenShift Container Platform 4.9 uses CRI-O instead of the Docker Container Engine. 
Kubelet (kubelet), which accepts requests for managing containers on the machine from master services. CRI-O and Kubelet must run directly on the host as systemd services because they need to be running before you can run other containers. The installer-* and revision-pruner-* control plane pods must run with root permissions because they write to the /etc/kubernetes directory, which is owned by the root user. These pods are in the following namespaces: openshift-etcd openshift-kube-apiserver openshift-kube-controller-manager openshift-kube-scheduler 5.3. Operators in OpenShift Container Platform Operators are among the most important components of OpenShift Container Platform. Operators are the preferred method of packaging, deploying, and managing services on the control plane. They can also provide advantages to applications that users run. Operators integrate with Kubernetes APIs and CLI tools such as kubectl and oc commands. They provide the means of monitoring applications, performing health checks, managing over-the-air (OTA) updates, and ensuring that applications remain in your specified state. Operators also offer a more granular configuration experience. You configure each component by modifying the API that the Operator exposes instead of modifying a global configuration file. Because CRI-O and the Kubelet run on every node, almost every other cluster function can be managed on the control plane by using Operators. Components that are added to the control plane by using Operators include critical networking and credential services. While both follow similar Operator concepts and goals, Operators in OpenShift Container Platform are managed by two different systems, depending on their purpose: Cluster Operators, which are managed by the Cluster Version Operator (CVO), are installed by default to perform cluster functions. Optional add-on Operators, which are managed by Operator Lifecycle Manager (OLM), can be made accessible for users to run in their applications. 5.3.1. Cluster Operators In OpenShift Container Platform, all cluster functions are divided into a series of default cluster Operators . Cluster Operators manage a particular area of cluster functionality, such as cluster-wide application logging, management of the Kubernetes control plane, or the machine provisioning system. Cluster Operators are represented by a ClusterOperator object, which cluster administrators can view in the OpenShift Container Platform web console from the Administration Cluster Settings page. Each cluster Operator provides a simple API for determining cluster functionality. The Operator hides the details of managing the lifecycle of that component. Operators can manage a single component or tens of components, but the end goal is always to reduce operational burden by automating common actions. Additional resources Cluster Operators reference 5.3.2. Add-on Operators Operator Lifecycle Manager (OLM) and OperatorHub are default components in OpenShift Container Platform that help manage Kubernetes-native applications as Operators. Together they provide the system for discovering, installing, and managing the optional add-on Operators available on the cluster. Using OperatorHub in the OpenShift Container Platform web console, cluster administrators and authorized users can select Operators to install from catalogs of Operators. After installing an Operator from OperatorHub, it can be made available globally or in specific namespaces to run in user applications. 
Default catalog sources are available that include Red Hat Operators, certified Operators, and community Operators. Cluster administrators can also add their own custom catalog sources, which can contain a custom set of Operators. Developers can use the Operator SDK to help author custom Operators that take advantage of OLM features, as well. Their Operator can then be bundled and added to a custom catalog source, which can be added to a cluster and made available to users. Note OLM does not manage the cluster Operators that comprise the OpenShift Container Platform architecture. Additional resources For more details on running add-on Operators in OpenShift Container Platform, see the Operators guide sections on Operator Lifecycle Manager (OLM) and OperatorHub . For more details on the Operator SDK, see Developing Operators . 5.4. About the Machine Config Operator OpenShift Container Platform 4.9 integrates both operating system and cluster management. Because the cluster manages its own updates, including updates to Red Hat Enterprise Linux CoreOS (RHCOS) on cluster nodes, OpenShift Container Platform provides an opinionated lifecycle management experience that simplifies the orchestration of node upgrades. OpenShift Container Platform employs three daemon sets and controllers to simplify node management. These daemon sets orchestrate operating system updates and configuration changes to the hosts by using standard Kubernetes-style constructs. They include: The machine-config-controller , which coordinates machine upgrades from the control plane. It monitors all of the cluster nodes and orchestrates their configuration updates. The machine-config-daemon daemon set, which runs on each node in the cluster and updates a machine to configuration as defined by machine config and as instructed by the MachineConfigController. When the node detects a change, it drains off its pods, applies the update, and reboots. These changes come in the form of Ignition configuration files that apply the specified machine configuration and control kubelet configuration. The update itself is delivered in a container. This process is key to the success of managing OpenShift Container Platform and RHCOS updates together. The machine-config-server daemon set, which provides the Ignition config files to control plane nodes as they join the cluster. The machine configuration is a subset of the Ignition configuration. The machine-config-daemon reads the machine configuration to see if it needs to do an OSTree update or if it must apply a series of systemd kubelet file changes, configuration changes, or other changes to the operating system or OpenShift Container Platform configuration. When you perform node management operations, you create or modify a KubeletConfig custom resource (CR). Important When changes are made to a machine configuration, the Machine Config Operator (MCO) automatically reboots all corresponding nodes in order for the changes to take effect. To prevent the nodes from automatically rebooting after machine configuration changes, before making the changes, you must pause the autoreboot process by setting the spec.paused field to true in the corresponding machine config pool. When paused, machine configuration changes are not applied until you set the spec.paused field to false and the nodes have rebooted into the new configuration. Make sure the pools are unpaused when the CA certificate rotation happens. If the MCPs are paused, the MCO cannot push the newly rotated certificates to those nodes. 
This causes the cluster to become degraded and causes failure in multiple oc commands, including oc debug, oc logs, oc exec, and oc attach. You receive alerts in the Alerting UI of the OpenShift Container Platform web console if an MCP is paused when the certificates are rotated. The following modifications do not trigger a node reboot: When the MCO detects any of the following changes, it applies the update without draining or rebooting the node: Changes to the SSH key in the spec.config.passwd.users.sshAuthorizedKeys parameter of a machine config. Changes to the global pull secret or pull secret in the openshift-config namespace. Automatic rotation of the /etc/kubernetes/kubelet-ca.crt certificate authority (CA) by the Kubernetes API Server Operator. When the MCO detects changes to the /etc/containers/registries.conf file, such as adding or editing an ImageDigestMirrorSet or ImageTagMirrorSet object, it drains the corresponding nodes, applies the changes, and uncordons the nodes. The node drain does not happen for the following changes: The addition of a registry with the pull-from-mirror = "digest-only" parameter set for each mirror. The addition of a mirror with the pull-from-mirror = "digest-only" parameter set in a registry. The addition of items to the unqualified-search-registries list. Additional information For information on preventing the control plane machines from rebooting after the Machine Config Operator makes changes to the machine config, see Disabling Machine Config Operator from automatically rebooting. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/architecture/control-plane
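The pause behaviour described above can be sketched with a pair of oc commands; the example targets the default worker pool and assumes cluster-admin access.

# Pause automatic reboots for the worker machine config pool before applying machine config changes
oc patch machineconfigpool worker --type=merge -p '{"spec":{"paused":true}}'

# After the changes are in place, unpause the pool so the Machine Config Operator can roll them out (nodes reboot as needed)
oc patch machineconfigpool worker --type=merge -p '{"spec":{"paused":false}}'

# Watch the pools converge on the new configuration
oc get machineconfigpool

Remember that pools must not stay paused across a CA certificate rotation, as noted above.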
Chapter 14. Authenticating third-party clients through RH-SSO | Chapter 14. Authenticating third-party clients through RH-SSO To use the different remote services provided by Business Central or by KIE Server, your client, such as curl, wget, a web browser, or a custom REST client, must authenticate through the RH-SSO server and have a valid token to perform the requests. To use the remote services, the authenticated user must have the following roles: rest-all for using Business Central remote services. kie-server for using the KIE Server remote services. Use the RH-SSO Admin Console to create these roles and assign them to the users that will consume the remote services. Your client can authenticate through RH-SSO using one of these options: Basic authentication, if it is supported by the client. Token-based authentication. 14.1. Basic authentication If you enabled basic authentication in the RH-SSO client adapter configuration for both Business Central and KIE Server, you can avoid the token grant and refresh calls and call the services as shown in the following examples: For the web-based remote repositories endpoint: For KIE Server: | [
"curl http://admin:password@localhost:8080/business-central/rest/repositories",
"curl http://admin:password@localhost:8080/kie-server/services/rest/server/"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/sso-third-party-proc_execution-server |
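For the token-based option mentioned in this chapter, a client typically requests an access token from the RH-SSO token endpoint and then presents it as a bearer token. The sketch below assumes an RH-SSO server on localhost:8180, a realm named demo, a client ID of kie, and the jq tool for extracting the token; all of these are illustrative assumptions rather than values defined above.

# Request an access token from RH-SSO
TOKEN=$(curl -s -d "grant_type=password" -d "client_id=kie" -d "username=admin" -d "password=password" http://localhost:8180/auth/realms/demo/protocol/openid-connect/token | jq -r .access_token)

# Call the KIE Server REST API with the bearer token
curl -H "Authorization: Bearer $TOKEN" http://localhost:8080/kie-server/services/rest/server/

Because the access token expires, long-running clients should refresh it with the refresh_token grant rather than re-sending credentials.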
2.3. Installing the Minimum Amount of Packages Required | 2.3. Installing the Minimum Amount of Packages Required It is best practice to install only the packages you will use because each piece of software on your computer could possibly contain a vulnerability. If you are installing from the DVD media, take the opportunity to select exactly what packages you want to install during the installation. If you find you need another package, you can always add it to the system later. For more information about installing the Minimal install environment, see the Software Selection chapter of the Red Hat Enterprise Linux 7 Installation Guide. A minimal installation can also be performed by a Kickstart file using the --nobase option. For more information about Kickstart installations, see the Package Selection section from the Red Hat Enterprise Linux 7 Installation Guide. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-installing_the_minimum_amount_of_packages_required |
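If a package turns out to be missing after a minimal installation, it can be added later as the text notes; the package name below is only an example.

# Install an additional package after the minimal installation
yum install openssh-clients

# Review what is currently installed before trimming the system further
yum list installed | less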
4.6.3. EDIT MONITORING SCRIPTS Subsection | 4.6.3. EDIT MONITORING SCRIPTS Subsection Click on the MONITORING SCRIPTS link at the top of the page. The EDIT MONITORING SCRIPTS subsection allows the administrator to specify a send/expect string sequence to verify that the service for the virtual server is functional on each real server. It is also the place where the administrator can specify customized scripts to check services requiring dynamically changing data. Figure 4.9. The EDIT MONITORING SCRIPTS Subsection Sending Program For more advanced service verification, you can use this field to specify the path to a service-checking script. This functionality is especially helpful for services that require dynamically changing data, such as HTTPS or SSL. To use this functionality, you must write a script that returns a textual response, set it to be executable, and type the path to it in the Sending Program field. Note To ensure that each server in the real server pool is checked, use the special token %h after the path to the script in the Sending Program field. This token is replaced with each real server's IP address as the script is called by the nanny daemon. The following is a sample script to use as a guide when composing an external service-checking script: Note If an external program is entered in the Sending Program field, then the Send field is ignored. Send Enter a string for the nanny daemon to send to each real server in this field. By default, the Send field is completed for HTTP. You can alter this value depending on your needs. If you leave this field blank, the nanny daemon attempts to open the port and assumes the service is running if it succeeds. Only one send sequence is allowed in this field, and it can only contain printable ASCII characters as well as the following escape characters: \n for new line. \r for carriage return. \t for tab. \ to escape the character which follows it. Expect Enter the textual response the server should return if it is functioning properly. If you wrote your own sending program, enter the response you told it to send if it was successful. Note To determine what to send for a given service, you can open a telnet connection to the port on a real server and see what is returned. For instance, FTP reports 220 upon connecting, so you could enter quit in the Send field and 220 in the Expect field. Warning Remember to click the ACCEPT button after making any changes in this panel to make sure you do not lose any changes when selecting a new panel. Once you have configured virtual servers using the Piranha Configuration Tool, you must copy specific configuration files to the backup LVS router. See Section 4.7, "Synchronizing Configuration Files" for details. | [
"#!/bin/sh TEST=`dig -t soa example.com @USD1 | grep -c dns.example.com if [ USDTEST != \"1\" ]; then echo \"OK else echo \"FAIL\" fi"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s2-piranha-virtservs-ems-vsa |
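To work out suitable Send and Expect strings as suggested in the note above, you can probe the service port by hand; the address and port below are illustrative.

# Connect to the FTP port on a real server and observe the greeting (for example, a 220 banner)
telnet 192.0.2.10 21

# Or, non-interactively, send a string and capture the response
printf 'quit\r\n' | nc 192.0.2.10 21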
Chapter 3. Updating Ansible Automation Platform on OpenShift Container Platform | Chapter 3. Updating Ansible Automation Platform on OpenShift Container Platform You can use an upgrade patch to update your operator-based Ansible Automation Platform. 3.1. Patch updating Ansible Automation Platform on OpenShift Container Platform When you perform a patch update for an installation of Ansible Automation Platform on OpenShift Container Platform, most updates happen within a channel: A new update becomes available in the marketplace (through the redhat-operator CatalogSource). A new InstallPlan is automatically created for your Ansible Automation Platform subscription. If the subscription is set to Manual, the InstallPlan must be manually approved in the OpenShift UI. If the subscription is set to Automatic, it upgrades as soon as the new version is available. Note It is recommended that you set a manual install strategy on your Ansible Automation Platform Operator subscription (set when installing or upgrading the Operator) so that you are prompted to approve an upgrade when it becomes available in your selected update channel. Stable channels for each X.Y release (for example, stable-2.5) are available. New Subscription, CSV, and Operator containers are created alongside the old ones. The old resources are then cleaned up if the new installation is successful. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/installing_on_openshift_container_platform/update-ocp
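When the subscription uses the Manual approval strategy, the pending InstallPlan can also be approved from the command line; the namespace and InstallPlan name below are placeholders that you would look up in your own cluster.

# Find the pending InstallPlan for the Ansible Automation Platform Operator subscription
oc get installplan -n aap

# Approve it so that the upgrade proceeds
oc patch installplan install-abcde -n aap --type=merge -p '{"spec":{"approved":true}}'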
7.114. libhbalinux | 7.114. libhbalinux 7.114.1. RHBA-2013:0415 - libhbalinux bug fix and enhancement update Updated libhbalinux packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libhbalinux package contains the Host Bus Adapter API (HBAAPI) vendor library, which uses standard kernel interfaces to obtain information about Fiber Channel Host Buses (FC HBA) in the system. Note The libhbalinux packages have been upgraded to upstream version 1.0.14, which provides a number of bug fixes and enhancements over the previous version. (BZ#819936) All users of libhbalinux are advised to upgrade to these updated libhbalinux packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/libhbalinux
Chapter 1. Migrating applications to Red Hat build of Quarkus 3.15 | Chapter 1. Migrating applications to Red Hat build of Quarkus 3.15 As an application developer, you can migrate applications that are based on Red Hat build of Quarkus version 3.2 or later to version 3.15 by using either the quarkus CLI or Maven. Important The Quarkus CLI is intended for development purposes, including tasks such as creating, updating, and building Quarkus projects. However, Red Hat does not support using the Quarkus CLI in production environments. 1.1. Updating projects to the latest Red Hat build of Quarkus version To update your Red Hat build of Quarkus projects to the latest version, follow these steps, which are explained in detail later in this guide: Use the quarkus CLI or Maven commands to run automated update tasks. Consult the Changes that affect compatibility with earlier versions section to perform any manual update tasks. 1.1.1. Automatic updates Running the quarkus CLI or Maven commands triggers OpenRewrite recipes that upgrade project dependencies and source code. This automated approach provides a convenient and reliable way to update your projects. However, not all migration tasks are automated. If specific updates are not applied after running the quarkus update command or its Maven equivalent, consider the following possible reasons: The required migration task is not covered by the available OpenRewrite recipes. An extension your project depends on is incompatible with the latest Red Hat build of Quarkus version. 1.1.2. Manual updates Manual updates give you the flexibility and control to address any migration tasks to ensure your project aligns with your specific needs. Tasks that are not automated must be handled manually. For a list of the migration tasks required to update from the release to this one, see the Changes That Affect Compatibility with Earlier Versions section of this guide. Reviewing the migration guide for each release version between the current version of your application project and the version you're upgrading to is essential. This review process ensures you are fully informed and prepared for the update process. For example, if upgrading from version 3.8 to 3.15, you only need to review this guide. If you are upgrading from version 3.2 to 3.15, you also need to review the intermediate version of this guide: Migrating applications to Red Hat build of Quarkus 3.8 guide. Each task in this migration guide outlines the required changes and indicates whether they are automatically handled by the quarkus update command and its Maven equivalent. For additional background, see the Quarkus community Migration guides . 1.2. Using the quarkus CLI to update the project Update your Red Hat build of Quarkus projects by using the quarkus CLI. Important The Quarkus CLI is intended for development purposes, including tasks such as creating, updating, and building Quarkus projects. However, Red Hat does not support using the Quarkus CLI in production environments. Prerequisites An IDE JDK 17 or 21 installed and JAVA_HOME configured Apache Maven 3.8.6 or later Optional: To build native Linux executables , Red Hat build of Quarkus supports using the Red Hat build of Quarkus Native Builder image (quarkus/mandrel-for-jdk-21-rhel8) , which is based on GraalVM Mandrel . The quarkus CLI 3.15.3 A project based on Red Hat build of Quarkus version 3.2 or later Procedure Create a working branch for your project in your version control system. 
Install the latest version of the quarkus CLI by following the installation guide . Verify the installation by running the following command: quarkus -v 3.15.3 Important : Configure the extension registry client as instructed in the Configuring Red Hat build of Quarkus extension registry client section of the "Getting Started with Red Hat build of Quarkus" guide. In the terminal, navigate to your project directory. Update the project: quarkus update Optional: To update to a specific stream, use the --stream option followed by a specific version; for example: quarkus update --stream=3.15 Review the output from the update command for instructions and perform any suggested tasks. Use a diff tool to inspect all changes made during the update process. Manually perform any changes that were not handled by updating the project. For details, refer to the following Changes that affect compatibility with earlier versions section. Ensure the project builds without errors, all tests pass, and the application functions as expected before deploying to production. 1.3. Using Maven to update the project Update your Red Hat build of Quarkus projects by using Maven. Prerequisites An IDE JDK 17 or 21 installed and JAVA_HOME configured Apache Maven 3.8.6 or later Optional: To build native Linux executables , Red Hat build of Quarkus supports using the Red Hat build of Quarkus Native Builder image (quarkus/mandrel-for-jdk-21-rhel8) , which is based on GraalVM Mandrel . A project based on Red Hat build of Quarkus version 3.2 or later Procedure Create a working branch for your project in your version control system. Important : Configure the extension registry client as detailed in the Configuring Red Hat build of Quarkus extension registry client section of the "Getting Started with Red Hat build of Quarkus" guide. Open a terminal and navigate to your project directory. Ensure that the Red Hat build of Quarkus Maven plugin version is aligned with the latest supported version. Configure the project according to the Getting started with Red Hat build of Quarkus guide, and then run: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.3.SP1-redhat-00002:update Optional: To update to a specific stream, use the -Dstream option followed by the desired version; for example: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.3.SP1-redhat-00002:update -Dstream=3.15 Review the output from the update command for any instructions and perform the suggested tasks. Use a diff tool to examine all changes made during the update process. Manually perform any changes that were not handled by updating the project. For details, refer to the following Changes that affect compatibility with earlier versions section. Ensure the project builds without errors, all tests pass, and the application functions as expected before deploying to production. 1.4. Changes that affect compatibility with earlier versions This section describes changes in Red Hat build of Quarkus 3.15 that affect the compatibility of applications built with earlier product versions. Review these breaking changes and take the necessary steps to ensure that your applications continue functioning after updating them to Red Hat build of Quarkus 3.15. You can perform many of the updates listed in this section by running quarkus update or the equivalent Maven command. This triggers automated OpenRewrite recipes that update your project dependencies and source code to the latest Red Hat build of Quarkus version. However, not all migration tasks are automated. 
If specific updates aren't applied by the automated update, it might be because the required migration task isn't covered by the available OpenRewrite recipes or an extension your project relies on is incompatible with the latest version. In such cases, you need to perform the updates manually. Be sure to review the following items to identify and address any manual migration tasks. 1.4.1. Compatibility 1.4.1.1. Spring compatibility layer updated to align with Spring Boot 3 With Red Hat build of Quarkus 3.15, the quarkus-spring-data-rest-extension was upgraded to align with the latest Spring Boot 3 API updates. If you use the PagingAndSortingRepository interface in your code, this upgrade might introduce breaking changes. Previously, the PagingAndSortingRepository extended the CrudRepository , which allowed your custom repository to inherit the methods from the CrudRepository . To resolve any issues related to this change, you must update your custom repository to extend either ListCrudRepository or CrudRepository directly. This adjustment ensures continued access to the required methods and maintains compatibility with the updated architecture. 1.4.2. Core 1.4.2.1. Dev Services startup detection change In Red Hat build of Quarkus 3.15, the method for determining whether Dev Services should start has changed. Previously, to decide whether to start Dev Services, Red Hat build of Quarkus checked if a configuration property was defined without expanding it. However, this approach caused issues if the property expanded to an empty value. Now, Red Hat build of Quarkus first checks whether the expanded property is empty. This change might result in Dev Services starting unexpectedly where they didn't before, in cases when the expanded property ends up being empty. Most Dev Services start when a given property is not provided, for instance, the JDBC URL. Due to this change, you should adjust your configuration properties to include a default value, ensuring that the expanded property is not empty. For example, if you use the quarkus-test-oidc-server component to mock the OpenID Connect (OIDC) server, and your application.properties file contains: Change the property value to: This way, if keycloak.url is not defined, the default replaced-by-test-resource value prevents the property from expanding to an empty value, thereby avoiding the unintended startup of Dev Services. If a variable is not defined in an expression, the whole expression will be empty. Note For this particular example, this change is applied automatically by running quarkus update . However, for other use cases, you might need to apply the change manually. 1.4.2.2. GraalVM SDK updates Red Hat build of Quarkus 3.15 updates the GraalVM SDK dependencies to version 23.1.2, correcting an earlier oversight and ensuring compatibility with the latest GraalVM features. If you develop extensions with GraalVM substitutions, replace the org.graalvm.sdk:graal-sdk dependency with org.graalvm.sdk:nativeimage . The nativeimage artifact includes only the required classes for substitutions, making it more streamlined. You can apply this change by running the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. Important JDK 17 requirement: GraalVM SDK 23.1.2 requires a minimum of JDK 17 at runtime. 1.4.2.3. Infinispan 15 component upgrade In Red Hat build of Quarkus 3.15, the Infinispan 15 component is upgraded to version 15.0. 
In earlier releases, before Infinispan 15 integration, you ran queries by using the following .Query.java code: @Inject RemoteCache<String, Book> booksCache; ... QueryFactory queryFactory = Search.getQueryFactory(booksCache); Query query = queryFactory.create("from book_sample.Book"); List<Book> list = query.execute().list(); However, with this release, this code no longer works because RemoteCache is now an @ApplicationScoped proxy bean and Search.getQueryFactory raises a ClassCastException . To resolve this, remove the indirection by using the query method in the RemoteCache API as follows: @Inject RemoteCache<String, Book> booksCache; ... Query<Book> query = booksCache.<Book>query("from book_sample.Book"); List<Book> list = query.execute().list(); You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. 1.4.2.4. Packaging configuration changes In Red Hat build of Quarkus 3.15, the following packaging-related properties have been renamed or changed. If you use the original properties in your configuration, they still work, but a warning is displayed. You can apply this change by running the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. Table 1.1. Update the following properties in your configuration Original names Current names quarkus.package.type For JAR builds, use quarkus.package.jar.type with a valid JAR type: fast-jar , uber-jar , mutable-jar , or legacy-jar (deprecated). For native builds, set quarkus.native.enabled to true . For native sources builds, also set quarkus.native.sources-only to true . You can also disable JAR building by setting quarkus.package.jar.enabled to false . quarkus.package.create-appcds quarkus.package.jar.appcds.enabled quarkus.package.appcds-builder-image quarkus.package.jar.appcds.builder-image quarkus.package.appcds-use-container quarkus.package.jar.appcds.use-container quarkus.package.compress-jar quarkus.package.jar.compress quarkus.package.filter-optional-dependencies quarkus.package.jar.filter-optional-dependencies quarkus.package.add-runner-suffix quarkus.package.jar.add-runner-suffix Note: This configuration property generally only applies when building uber-JARs. quarkus.package.user-configured-ignored-entries quarkus.package.jar.user-configured-ignored-entries quarkus.package.user-providers-directory quarkus.package.jar.user-providers-directory quarkus.package.included-optional-dependencies quarkus.package.jar.included-optional-dependencies quarkus.package.include-dependency-list quarkus.package.jar.include-dependency-list quarkus.package.decompiler.version , quarkus.package.vineflower.version No replacement; these properties are now ignored. quarkus.package.decompiler.enabled , quarkus.package.vineflower.enabled quarkus.package.jar.decompiler.enabled quarkus.package.decompiler.jar-directory , quarkus.package.vineflower.jar-directory quarkus.package.jar.decompiler.jar-directory quarkus.package.manifest.attributes.* quarkus.package.jar.manifest.attributes.* quarkus.package.manifest.sections.*.* quarkus.package.jar.manifest.sections.*.* quarkus.package.manifest.add-implementation-entries quarkus.package.jar.manifest.add-implementation-entries Update any code and configuration files to reflect these changes and stop warning messages during the build process. 1.4.2.5. 
ProfileManager and ProfileManager#getActiveProfile removed In Red Hat build of Quarkus 3.15, the deprecated ProfileManager class and ProfileManager#getActiveProfile method are removed because the ProfileManager did not handle multiple profiles. With this removal, the following configuration change is required: To retrieve an active profile, use the io.quarkus.runtime.configuration.ConfigUtils#getProfiles API. You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. 1.4.2.6. The quarkus-app directory is now created only for fast-jar packages In earlier releases, an unintended behavior was observed where a quarkus-app directory was always generated in the build system's output directory, regardless of the type of artifact produced. In Red Hat build of Quarkus 3.15, the build process creates the quarkus-app directory only for fast-jar packages, which is the default artifact type. If you configure the build for a different artifact type, the quarkus-app directory is not created. 1.4.2.7. Required adjustments for extension developers In Red Hat build of Quarkus 3.15, the extension annotation processor used to generate runtime files and configuration documentation has been redeveloped, offering more flexibility but with new constraints: Mixing the legacy @ConfigRoot with the new @ConfigMapping approach in the same module is no longer allowed. If you use only @ConfigMapping , you do not need to make any changes. For legacy @ConfigRoot , inform the annotation processor by adding the following to your maven-compiler-plugin configuration: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <annotationProcessorPaths> <path> <groupId>io.quarkus</groupId> <artifactId>quarkus-extension-processor</artifactId> <version>USD{quarkus.version}</version> </path> </annotationProcessorPaths> <compilerArgs> <arg>-AlegacyConfigRoot=true</arg> </compilerArgs> </configuration> </plugin> If no configuration annotations exist in the test classes, this flag can trigger warnings when compiling test classes. To avoid this, enable the annotation processor only for default-compile execution: <plugin> <artifactId>maven-compiler-plugin</artifactId> <executions> <execution> <id>default-compile</id> <configuration> <annotationProcessorPaths> <path> <groupId>io.quarkus</groupId> <artifactId>quarkus-extension-processor</artifactId> <version>USD{quarkus.version}</version> </path> </annotationProcessorPaths> <compilerArgs> <arg>-AlegacyConfigRoot=true</arg> </compilerArgs> </configuration> </execution> </executions> </plugin> Moving to the new @ConfigMapping interface is encouraged, but support for legacy @ConfigRoot classes is planned to continue for a while to ensure a smooth migration. Deprecation plans are planned to be announced at a later date. You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. 1.4.3. Data 1.4.3.1. Database version verified on startup With Red Hat build of Quarkus 3.15, the Hibernate ORM extension verifies that the database version it connects to at runtime is at least as high as the one configured at build time. This verification occurs even when the configuration relies on default settings targeting the minimum database versions supported by Red Hat build of Quarkus. 
This update aims to make application developers aware when they try to use a database version that Hibernate ORM or Red Hat build of Quarkus no longer support. If you try to use a database version that is no longer supported, Red Hat build of Quarkus does not start and throws an exception. This change affects applications that rely on the following database versions: DB2 earlier than version 10.5 Derby earlier than version 10.15.2 Oracle database earlier than version 19.0 MariaDB earlier than version 10.6 Microsoft SQL Server earlier than version 13 (2016) MySQL earlier than version 8.0 PostgreSQL earlier than version 12.0 If you cannot upgrade the database to a supported version, you can still try to use it, although some features might not work. To continue using an earlier, unsupported database version, set the db-version explicitly and, if necessary, set the dialect also. You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. For more information, see the following resources: Supported databases section of the Quarkus "Using Hibernate ORM and Jakarta Persistence" guide Hibernate ORM: Supported dialects Hibernate ORM: Community dialects 1.4.3.2. Dev Services default images updates In Red Hat build of Quarkus 3.15, the default images for several Dev Services have been updated to the following versions: PostgreSQL version 16 MySQL version 8.4 MongoDB version 7.0 Elasticsearch version 8.15 OpenSearch version 2.16 You can apply this change by running the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. To use a specific version or distribution of these services, you must manually override the default container image in your application properties. For more information, see the Configuring the image section of the Quarkus "Dev Services for Elasticsearch" guide. 1.4.3.3. Hibernate ORM auto-flush optimization In Red Hat build of Quarkus 3.15, the Hibernate Object/Relational Mapping (ORM) auto-flush feature is optimized. Now, by default, before running a Hibernate Query Language (HQL), Java Persistence Query Language (JPQL), or native query, Hibernate ORM flushes only pending changes to the database if it detects that these changes might impact the query results. In most cases, this optimization will improve the performance of your applications, such as with more efficient batching; however, the following issues might occur: If you are running a native query, auto-flushing requires you to specify relevant entity types on the query. For more information, see the Hibernate ORM user guide . If pending changes impact only the target tables of the foreign keys used in the query, but those target tables are not used in the query, auto flushing does not happen. If you want to revert to the previous behavior or choose an alternative default value, use the newly introduced quarkus.hibernate-orm.flush.mode configuration property. Set this property to always : quarkus.hibernate-orm.flush.mode=always . You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. For more information, see the following resources: Hibernate ORM 6.6 migration guide Hibernate ORM 6.6 user guide 1.4.3.4.
Hibernate ORM upgraded to version 6.6 In Red Hat build of Quarkus 3.15, Hibernate ORM extensions were upgraded to Hibernate ORM 6.6 and introduced the following breaking changes: Applications with certain forms of incorrect configuration, mapping, or queries might now also throw exceptions. Previously, they would issue only warnings or malfunctions. For example: An invalid creation attempt when merging an entity with a @GeneratedValue identifier or a @Version property set to a non-null value now fails. Previously, it created the entity in the database. For more information, see Merge versioned entity when row is deleted . Applying both @MappedSuperclass and @Embeddable on the same type is no longer allowed. For more information, see Explicit validation of annotated class types . Some features are now enabled by default, to avoid unexpected behaviors. For example: Embeddables with @Embeddable annotated subtypes now use discriminator-based inheritance by default. For more information, see link: Discriminator-based embeddable inheritance . In some instances, behavior was changed to comply with the Java Persistence API (JPA) specification. For more information, see Criteria:jakarta.persistence.criteria.Expression#as(Class) . For more information, see the following resources: Hibernate ORM 6.5 migration guide Hibernate ORM 6.6 migration guide Hibernate ORM 6.6 user guide 1.4.3.5. Hibernate Search database schema update for outbox-polling system tables The quarkus-hibernate-search-orm-outbox-polling extension relies on system tables in the database to which Hibernate ORM connects, and with Red Hat build of Quarkus 3.15, the schema of these system tables might change. If you use this extension, you need to migrate your database schema. For information about how to migrate your database schema, see the Outbox polling database tables section in the "Hibernate Search migration" guide. If you cannot update your database schema, apply the following settings to restore the defaults: For the default persistence unit, specify the following: quarkus.hibernate-search-orm.coordination.entity-mapping.agent.uuid-type=char quarkus.hibernate-search-orm.coordination.entity-mapping.outbox-event.uuid-type=char For named persistence units, specify the following: quarkus.hibernate-search-orm.<persistence-unit-name>.coordination.entity-mapping.agent.uuid-type=char quarkus.hibernate-search-orm.<persistence-unit-name>.coordination.entity-mapping.outbox-event.uuid-type=char You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. 1.4.3.6. Panache annotation processor removed for Hibernate ORM, Hibernate Reactive, and MongoDB In Red Hat build of Quarkus 3.15, the io.quarkus:quarkus-panache-common annotation processor has been removed because it is no longer required for externally defined entities when using Hibernate ORM with Panache, Hibernate Reactive with Panache, or MongoDB with Panache. In earlier releases, this annotation processor was automatically run when found in the classpath. If you had overridden the set of annotation processors in your build tool, you needed to add it explicitly. Remove all references to io.quarkus:quarkus-panache-common from the annotation processor list. You can apply this change by running the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. 
Maven users Find and remove the io.quarkus:quarkus-panache-common annotation processor from your pom.xml file: <build> <plugins> <!-- other plugins --> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.13.0</version> <!-- Necessary for proper dependency management in annotationProcessorPaths --> <configuration> <annotationProcessorPaths> <path> <groupId>io.quarkus</groupId> <artifactId>quarkus-panache-common</artifactId> </path> </annotationProcessorPaths> </configuration> </plugin> <!-- other plugins --> </plugins> </build> Gradle users For Gradle builds, find and remove the io.quarkus:quarkus-panache-common annotation processor from your build.gradle file: dependencies { annotationProcessor "io.quarkus:quarkus-panache-common" } 1.4.4. Messaging 1.4.4.1. Changes to execution mode for synchronous methods in Quarkus Messaging In Red Hat build of Quarkus 3.15, the execution mode for synchronous methods in Quarkus Messaging extensions now defaults to worker threads. In earlier versions, these methods ran on the Vert.x event loop (I/O thread). For example, the following processing method is now called by default on a worker thread instead of the Vert.x I/O thread: package org.acme; import org.eclipse.microprofile.reactive.messaging.Incoming; import org.eclipse.microprofile.reactive.messaging.Outgoing; @Incoming("source") @Outgoing("sink") public Result process(int payload) { return new Result(payload); } To revert to the earlier behavior, you can use the quarkus.messaging.blocking.signatures.execution.mode configuration property. Possible values are: worker (default) event-loop (earlier behavior) virtual-thread You can also adjust the execution mode on a per-method basis by using the @Blocking and @NonBlocking annotations: package org.acme; import io.smallrye.common.annotation.NonBlocking; import org.eclipse.microprofile.reactive.messaging.Incoming; @Incoming("source") @NonBlocking public void consume(int payload) { // called on I/O thread } By annotating the method with @NonBlocking , you ensure it is called on the Vert.x event loop (I/O thread). Review your messaging methods to ensure they run in the required execution mode, and update your code or configuration as needed. You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. 1.4.4.2. SmallRye Reactive Messaging extensions renamed to Quarkus Messaging In Red Hat build of Quarkus 3.15, the SmallRye Reactive Messaging extensions have been renamed to quarkus-messaging-* to reflect their support for both reactive and blocking workloads. Maven relocations have been implemented. You can apply this change by running the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. Replace all instances of the old extension names in your projects with the new names. Renamed extensions The following extensions have been renamed: Old Name New Name quarkus-smallrye-reactive-messaging quarkus-messaging quarkus-smallrye-reactive-messaging-amqp quarkus-messaging-amqp quarkus-smallrye-reactive-messaging-kafka quarkus-messaging-kafka quarkus-smallrye-reactive-messaging-mqtt quarkus-messaging-mqtt quarkus-smallrye-reactive-messaging-pulsar quarkus-messaging-pulsar quarkus-smallrye-reactive-messaging-rabbitmq quarkus-messaging-rabbitmq The configuration root has also been updated from quarkus.smallrye-reactive-messaging. to quarkus.messaging. . 
An automatic fallback mechanism is in place to revert to the old configuration properties, if necessary. Impact on extension developers If you are an extension developer, note that the following deployment-related artifacts have also been renamed: Old Name New Name quarkus-smallrye-reactive-messaging-deployment quarkus-messaging-deployment quarkus-smallrye-reactive-messaging-kotlin quarkus-messaging-kotlin quarkus-smallrye-reactive-messaging-amqp-deployment quarkus-messaging-amqp-deployment quarkus-smallrye-reactive-messaging-kafka-deployment quarkus-messaging-kafka-deployment quarkus-smallrye-reactive-messaging-mqtt-deployment quarkus-messaging-mqtt-deployment quarkus-smallrye-reactive-messaging-pulsar-deployment quarkus-messaging-pulsar-deployment quarkus-smallrye-reactive-messaging-rabbitmq-deployment quarkus-messaging-rabbitmq-deployment Update your code and configurations to reflect these changes. 1.4.5. Observability 1.4.5.1. All quarkus.opentelemetry.* configuration properties removed In Red Hat build of Quarkus 3.15, all configuration properties under the quarkus.opentelemetry.* namespace have been removed. These properties were deprecated and maintained for backward compatibility in versions 3.2 and 3.8. Because they are now removed in 3.15, you must make the following changes to migrate your applications: Update all properties from the quarkus.opentelemetry. namespace to the quarkus.otel. namespace so they align with the OpenTelemetry Java autoconfigure conventions . Update the sampler property from quarkus.opentelemetry.tracer.sampler to quarkus.otel.traces.sampler . The parent-based sampler property quarkus.opentelemetry.tracer.sampler.parent-based has been removed. To mark the sampler as parent-based, specify it directly in the quarkus.otel.traces.sampler property, as shown below. Table 1.2. Value mapping from the original to the new sampler configuration Old value New value New value (parent-based) on always_on parentbased_always_on off always_off parentbased_always_off ratio traceidratio parentbased_traceidratio Update your code and configurations to use these new properties and values. You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. 1.4.5.2. OpenTelemetry io.opentelemetry.extension.annotations.WithSpan annotation removed In Red Hat build of Quarkus 3.15, the deprecated annotation io.opentelemetry.extension.annotations.WithSpan has been removed. Update your code to use the new io.opentelemetry.instrumentation.annotations.WithSpan annotation. Review and change any configurations or code that rely on the old annotation to ensure compatibility with this update. You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. 1.4.5.3. OpenTelemetry new semantic conventions for HTTP In Red Hat build of Quarkus 3.15, the OpenTelemetry (OTel) SDK has been upgraded to version 1.39.0 and instrumentation to version 2.5.0. This upgrade enforces the new conventions defined in the OpenTelemetry HTTP semantic convention stability migration . It also completes the removal of the deprecated standards. Update any code or configurations that depend on those conventions. Additionally, the quarkus.otel.semconv-stability.opt-in system property has been removed because opting-in is no longer supported. Update any code or configurations that depend on this property. 
You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. 1.4.5.4. OpenTelemetry REST client span name changes In Red Hat build of Quarkus 3.15, the OpenTelemetry (OTel) REST client span names now include both the HTTP request method and path; for example, GET /hello . Earlier REST client span names included only the HTTP method. Update any code or configurations that depend on the specific format of REST client span names. You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. 1.4.5.5. OpenTelemetry (OTel) span names generated by database operations changed In Red Hat build of Quarkus 3.15, the span names generated by database operations have changed due to updates in the libraries we use. For example, before this change, the old span name for creating a database table was DB Query . Now, with this change, the new span name is CREATE TABLE {table_name} . This new naming convention provides more descriptive and meaningful span names that accurately reflect the specific database operations performed. As a result, you might notice different names for spans generated by database operations in your observability tools. Review and update any custom monitoring configurations, alerts, or dashboards that rely on the old span names to accommodate the new naming convention. You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. For more information about the database span naming conventions and best practices, see the OpenTelemetry Database Span Semantic Conventions specification. 1.4.5.6. SmallRye health configuration properties relocated In earlier versions, some configuration properties were incorrectly located under the quarkus.health configuration root. In Red Hat build of Quarkus 3.15, those properties have been relocated to the quarkus.smallrye-health configuration root for consistency: quarkus.health.extensions.enabled has been moved to quarkus.smallrye-health.extensions.enabled quarkus.health.openapi.included has been moved to quarkus.smallrye-health.openapi.included In Red Hat build of Quarkus 3.15, the properties with the old configuration roots have been deprecated and are planned to be removed in a future release. Update any code or configurations to use the new property locations. You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. 1.4.6. Security 1.4.6.1. Keystore and truststore default format changed With Red Hat build of Quarkus 3.15, Java KeyStore (JKS) is no longer the default keystore and truststore format. Instead, Red Hat build of Quarkus determines the format based on the file extension, as follows: .pem , .crt , and .key files are read as Privacy Enhanced Mail (PEM) certificates and keys .jks , .keystore , and .truststore files are read as JKS keystores and truststores .p12 , .pkcs12 , and .pfx files are read as PKCS12 keystores and truststores If your file does not use one of these extensions, you must set the file format (see also the keystore generation sketch after the command listing below).
For example, to specify the JKS format, set the following configuration values: quarkus.http.ssl.certificate.key-store-file-type=JKS quarkus.http.ssl.certificate.trust-store-file-type=JKS Note To specify P12 or PEM formats, set P12 or PEM instead of JKS. 1.4.6.2. OpenID Connect (OIDC) UserInfo acquisition enforced when UserInfo is injected In Red Hat build of Quarkus 3.15, the quarkus.oidc.authentication.user-info-required property is now automatically set to true when io.quarkus.oidc.UserInfo is injected into a REST endpoint. This change removes the need to configure this property manually, because UserInfo is typically injected when its use is intended. However, in more complex setups where multiple OIDC tenants secure endpoints, and some tenants do not support UserInfo , this change might lead to tenant initialization failures. This happens only when a tenant that does not support UserInfo is still configured to acquire it, potentially causing requests secured by that tenant to fail. To avoid such failures in multi-tenant setups, set the tenant-specific quarkus.oidc.<tenant-id>.authentication.user-info-required property to false for tenants that do not support UserInfo . This ensures that only tenants that support UserInfo enforce its acquisition. You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. 1.4.7. Tooling 1.4.7.1. JUnit Pioneer version no longer enforced In Red Hat build of Quarkus 3.15, the Quarkus BOM no longer enforces the version of the org.junit-pioneer:junit-pioneer dependency. If you are using this dependency in your project, you must explicitly specify its version in your build files. To avoid any build issues, define the version in your pom.xml or build.gradle files; for example: pom.xml <properties> ... <junit-pioneer.version>2.2.0</junit-pioneer.version> ... </properties> You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. 1.4.8. Web 1.4.8.1. Quarkus REST: Adding web links programmatically In Red Hat build of Quarkus 3.15, for Quarkus REST (formerly RESTEasy Reactive), the signatures of the Hypertext Application Language (HAL) wrapper classes have been modified to include the type of elements in the collection: HalCollectionWrapper<T> HalEntityWrapper<T> Add the appropriate type argument wherever you use the HAL wrapper classes in your code. You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. New preferred method for creating HAL wrappers The preferred approach for creating HAL wrapper classes has changed. Instead of using constructors, use the helper methods provided by the io.quarkus.hal.HalService bean. For example: @Path("/records") public class RecordsResource { @Inject HalService halService; @GET @Produces({ MediaType.APPLICATION_JSON, RestMediaType.APPLICATION_HAL_JSON }) @RestLink(rel = "list") public HalCollectionWrapper<Record> getAll() { List<Record> list = // ... HalCollectionWrapper<Record> halCollection = halService.toHalCollectionWrapper( list, "collectionName", Record.class); // ... 
return halCollection; } @GET @Produces({ MediaType.APPLICATION_JSON, RestMediaType.APPLICATION_HAL_JSON }) @Path("/{id}") @RestLink(rel = "self") @InjectRestLinks(RestLinkType.INSTANCE) public HalEntityWrapper<Record> get(@PathParam("id") int id) { Record entity = // ... HalEntityWrapper<Record> halEntity = halService.toHalWrapper(entity); // ... return halEntity; } } Although creating HAL wrappers by using constructors still works, that approach might be deprecated in a future version. Therefore, start using the helper methods exposed by the HalService bean. For more information, see the Web Links support section of the "Quarkus REST" guide. 1.4.8.2. Quarkus REST filters on non-REST paths In earlier releases, Quarkus REST (formerly RESTEasy Reactive) filters ran even when the requested resource was not a REST resource. In Red Hat build of Quarkus 3.15, that behavior has changed. Now, Quarkus REST filters run only on REST resources. Note Quarkus REST is a Jakarta REST (formerly JAX-RS) implementation built from the ground up to work on Quarkus's common Vert.x layer. It is fully reactive and highly optimized for build-time processing. If you need your filters to apply to non-Quarkus REST resources, you can do so by adding a custom ExceptionMapper for NotFoundException , as shown in the following example: package io.quarkus.resteasy.reactive.server.test.customproviders; import jakarta.ws.rs.NotFoundException; import jakarta.ws.rs.core.Response; import jakarta.ws.rs.ext.ExceptionMapper; import jakarta.ws.rs.ext.Provider; @Provider public class NotFoundExeptionMapper implements ExceptionMapper<NotFoundException> { @Override public Response toResponse(NotFoundException exception) { return Response.status(404).build(); } } With this ExceptionMapper , Quarkus REST handles "Not Found" resources, allowing filters to run as expected. Earlier, filters always ran on non-REST resources, which made it difficult for other extensions to handle "Not Found" scenarios effectively. You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. 1.4.8.3. Qute REST integration update: TemplateInstance is blocking by default In Red Hat build of Quarkus 3.15, the io.quarkus.qute.TemplateInstance class is no longer registered as a non-blocking type. As a result, if a Jakarta REST (formerly JAX-RS) resource method returns a TemplateInstance object, it is now considered blocking by default. To restore the earlier non-blocking behavior, apply the @io.smallrye.common.annotation.NonBlocking annotation to the resource method. Note This change only affects applications using the Quarkus REST (formerly RESTEasy Reactive) extension, quarkus-rest . You must apply this change manually. It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. 1.4.8.4. RESTEasy Reactive extensions renamed to Quarkus REST In Red Hat build of Quarkus 3.15, the RESTEasy Reactive extensions have been renamed to quarkus-rest-* to reflect their support for both reactive and blocking workloads. Maven relocations have been implemented. You can apply this change by running the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. Replace all instances of the old extension names in your projects with the new names. Most of the new extension names follow this naming convention: Extensions ending with -rest use Quarkus REST (formerly RESTEasy Reactive). 
Extensions ending with -resteasy use RESTEasy Classic. Renamed extensions The following extensions have been renamed: Old Name New Name quarkus-resteasy-reactive quarkus-rest quarkus-resteasy-reactive-jackson quarkus-rest-jackson quarkus-resteasy-reactive-jaxb quarkus-rest-jaxb quarkus-resteasy-reactive-jsonb quarkus-rest-jsonb quarkus-resteasy-reactive-kotlin quarkus-rest-kotlin quarkus-resteasy-reactive-kotlin-serialization quarkus-rest-kotlin-serialization quarkus-resteasy-reactive-links quarkus-rest-links quarkus-resteasy-reactive-qute quarkus-rest-qute quarkus-resteasy-reactive-servlet quarkus-rest-servlet quarkus-rest-client-reactive quarkus-rest-client quarkus-rest-client-reactive-jackson quarkus-rest-client-jackson quarkus-rest-client-reactive-jaxb quarkus-rest-client-jaxb quarkus-rest-client-reactive-jsonb quarkus-rest-client-jsonb quarkus-rest-client-reactive-kotlin-serialization quarkus-rest-client-kotlin-serialization quarkus-jaxrs-client-reactive quarkus-rest-client-jaxrs quarkus-keycloak-admin-client quarkus-keycloak-admin-resteasy-client quarkus-keycloak-admin-client-reactive quarkus-keycloak-admin-rest-client quarkus-oidc-client-filter quarkus-resteasy-client-oidc-filter quarkus-oidc-client-reactive-filter quarkus-rest-client-oidc-filter quarkus-oidc-token-propagation quarkus-resteasy-client-oidc-token-propagation quarkus-oidc-token-propagation-reactive quarkus-rest-client-oidc-token-propagation quarkus-csrf-reactive quarkus-rest-csrf quarkus-spring-web-resteasy-classic quarkus-spring-web-resteasy quarkus-spring-web-resteasy-reactive quarkus-spring-web-rest The configuration roots have also been updated: Old configuration root New configuration root quarkus.resteasy-reactive.* quarkus.rest.* quarkus.rest-client-reactive.* quarkus.rest-client.* quarkus.oidc-client-reactive-filter.* quarkus.rest-client-oidc-filter.* quarkus.oidc-token-propagation-reactive.* quarkus.rest-client-oidc-token-propagation.* quarkus.csrf-reactive.* quarkus.rest-csrf.* quarkus.oidc-client-filter.* quarkus.resteasy-client-oidc-filter.* quarkus.oidc-token-propagation.* quarkus.resteasy-client-oidc-token-propagation.* An automatic fallback mechanism is in place to revert to the old configuration properties if necessary. 
Impact on extension developers If you are an extension developer, note that the following deployment-related artifacts have also been renamed: Old Name New Name quarkus-resteasy-reactive-deployment quarkus-rest-deployment quarkus-resteasy-reactive-jackson-common quarkus-rest-jackson-common quarkus-resteasy-reactive-jackson-common-deployment quarkus-rest-jackson-common-deployment quarkus-resteasy-reactive-jackson-deployment quarkus-rest-jackson-deployment quarkus-resteasy-reactive-jaxb-common quarkus-rest-jaxb-common quarkus-resteasy-reactive-jaxb-common-deployment quarkus-rest-jaxb-common-deployment quarkus-resteasy-reactive-jaxb-deployment quarkus-rest-jaxb-deployment quarkus-resteasy-reactive-jsonb-common quarkus-rest-jsonb-common quarkus-resteasy-reactive-jsonb-common-deployment quarkus-rest-jsonb-common-deployment quarkus-resteasy-reactive-jsonb-deployment quarkus-rest-jsonb-deployment quarkus-resteasy-reactive-kotlin-deployment quarkus-rest-kotlin-deployment quarkus-resteasy-reactive-kotlin-serialization-common quarkus-rest-kotlin-serialization-common quarkus-resteasy-reactive-kotlin-serialization-common-deployment quarkus-rest-kotlin-serialization-common-deployment quarkus-resteasy-reactive-kotlin-serialization-deployment quarkus-rest-kotlin-serialization-deployment quarkus-resteasy-reactive-links-deployment quarkus-rest-links-deployment quarkus-resteasy-reactive-qute-deployment quarkus-rest-qute-deployment quarkus-resteasy-reactive-server-common quarkus-rest-server-common quarkus-resteasy-reactive-server-spi-deployment quarkus-rest-server-spi-deployment quarkus-resteasy-reactive-servlet-deployment quarkus-rest-servlet-deployment quarkus-resteasy-reactive-common quarkus-rest-common quarkus-resteasy-reactive-common-deployment quarkus-rest-common-deployment quarkus-rest-client-reactive-deployment quarkus-rest-client-deployment quarkus-rest-client-reactive-jackson-deployment quarkus-rest-client-jackson-deployment quarkus-rest-client-reactive-jaxb-deployment quarkus-rest-client-jaxb-deployment quarkus-rest-client-reactive-jsonb-deployment quarkus-rest-client-jsonb-deployment quarkus-rest-client-reactive-kotlin-serialization-deployment quarkus-rest-client-kotlin-serialization-deployment quarkus-rest-client-reactive-spi-deployment quarkus-rest-client-spi-deployment quarkus-jaxrs-client-reactive-deployment quarkus-rest-client-jaxrs-deployment quarkus-keycloak-admin-client-deployment quarkus-keycloak-admin-resteasy-client-deployment quarkus-keycloak-admin-client-reactive-deployment quarkus-keycloak-admin-rest-client-deployment quarkus-oidc-client-filter-deployment quarkus-resteasy-client-oidc-filter-deployment quarkus-oidc-client-reactive-filter-deployment quarkus-rest-client-oidc-filter-deployment quarkus-oidc-token-propagation-deployment quarkus-resteasy-client-oidc-token-propagation-deployment quarkus-oidc-token-propagation-reactive-deployment quarkus-rest-client-oidc-token-propagation-deployment quarkus-csrf-reactive-deployment quarkus-rest-csrf-deployment quarkus-spring-web-resteasy-classic-deployment quarkus-spring-web-resteasy-deployment quarkus-spring-web-resteasy-reactive-deployment quarkus-spring-web-rest-deployment 1.4.8.5. WebJar Locator extension renamed to Web-dependency-locator In Red Hat build of Quarkus 3.15, the quarkus-webjars-locator extension is renamed to quarkus-web-dependency-locator and is also enhanced to include mvnpm (Maven NPM) and importmaps . Update your code and configurations to use the new extension name. You must apply this change manually. 
It is not covered by the automated update described in the Migrating applications to Red Hat build of Quarkus 3.15 guide. For more information, see the Quarkus Web dependency locator guide. 1.5. Additional resources Release notes for Red Hat build of Quarkus version 3.15 | [
"quarkus -v 3.15.3",
"quarkus update",
"quarkus update --stream=3.15",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.3.SP1-redhat-00002:update",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.3.SP1-redhat-00002:update -Dstream=3.15",
"%test.quarkus.oidc.auth-server-url=USD{keycloak.url}/realms/quarkus/",
"%test.quarkus.oidc.auth-server-url=USD{keycloak.url:replaced-by-test-resource}/realms/quarkus/",
"@Inject RemoteCache<String, Book> booksCache; ... QueryFactory queryFactory = Search.getQueryFactory(booksCache); Query query = queryFactory.create(\"from book_sample.Book\"); List<Book> list = query.execute().list();",
"@Inject RemoteCache<String, Book> booksCache; ... Query<Book> query = booksCache.<Book>query(\"from book_sample.Book\"); List<Book> list = query.execute().list();",
"<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <annotationProcessorPaths> <path> <groupId>io.quarkus</groupId> <artifactId>quarkus-extension-processor</artifactId> <version>USD{quarkus.version}</version> </path> </annotationProcessorPaths> <compilerArgs> <arg>-AlegacyConfigRoot=true</arg> </compilerArgs> </configuration> </plugin>",
"<plugin> <artifactId>maven-compiler-plugin</artifactId> <executions> <execution> <id>default-compile</id> <configuration> <annotationProcessorPaths> <path> <groupId>io.quarkus</groupId> <artifactId>quarkus-extension-processor</artifactId> <version>USD{quarkus.version}</version> </path> </annotationProcessorPaths> <compilerArgs> <arg>-AlegacyConfigRoot=true</arg> </compilerArgs> </configuration> </execution> </executions> </plugin>",
"<build> <plugins> <!-- other plugins --> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.13.0</version> <!-- Necessary for proper dependency management in annotationProcessorPaths --> <configuration> <annotationProcessorPaths> <path> <groupId>io.quarkus</groupId> <artifactId>quarkus-panache-common</artifactId> </path> </annotationProcessorPaths> </configuration> </plugin> <!-- other plugins --> </plugins> </build>",
"dependencies { annotationProcessor \"io.quarkus:quarkus-panache-common\" }",
"package org.acme; import org.eclipse.microprofile.reactive.messaging.Incoming; import org.eclipse.microprofile.reactive.messaging.Outgoing; @Incoming(\"source\") @Outgoing(\"sink\") public Result process(int payload) { return new Result(payload); }",
"package org.acme; import io.smallrye.common.annotation.NonBlocking; import org.eclipse.microprofile.reactive.messaging.Incoming; @Incoming(\"source\") @NonBlocking public void consume(int payload) { // called on I/O thread }",
"<properties> <junit-pioneer.version>2.2.0</junit-pioneer.version> </properties>",
"@Path(\"/records\") public class RecordsResource { @Inject HalService halService; @GET @Produces({ MediaType.APPLICATION_JSON, RestMediaType.APPLICATION_HAL_JSON }) @RestLink(rel = \"list\") public HalCollectionWrapper<Record> getAll() { List<Record> list = // HalCollectionWrapper<Record> halCollection = halService.toHalCollectionWrapper( list, \"collectionName\", Record.class); // return halCollection; } @GET @Produces({ MediaType.APPLICATION_JSON, RestMediaType.APPLICATION_HAL_JSON }) @Path(\"/{id}\") @RestLink(rel = \"self\") @InjectRestLinks(RestLinkType.INSTANCE) public HalEntityWrapper<Record> get(@PathParam(\"id\") int id) { Record entity = // HalEntityWrapper<Record> halEntity = halService.toHalWrapper(entity); // return halEntity; } }",
"package io.quarkus.resteasy.reactive.server.test.customproviders; import jakarta.ws.rs.NotFoundException; import jakarta.ws.rs.core.Response; import jakarta.ws.rs.ext.ExceptionMapper; import jakarta.ws.rs.ext.Provider; @Provider public class NotFoundExeptionMapper implements ExceptionMapper<NotFoundException> { @Override public Response toResponse(NotFoundException exception) { return Response.status(404).build(); } }"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/migrating_applications_to_red_hat_build_of_quarkus_3.15/assembly_migrating-to-quarkus-3_quarkus-migration |
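To supplement the keystore and truststore format notes in section 1.4.6.1 above, the following is a minimal sketch of creating a PKCS12 keystore whose .p12 extension Red Hat build of Quarkus recognizes automatically. The alias, password, file names, and certificate subject are illustrative assumptions, not values mandated by the product documentation.
# Generate a self-signed key pair in a PKCS12 store; the .p12 extension lets
# the store type be detected from the file name without extra configuration.
keytool -genkeypair -alias server -keyalg RSA -keysize 2048 \
  -storetype PKCS12 -keystore server-keystore.p12 -storepass changeit \
  -validity 365 -dname "CN=localhost"
# If the store file uses a non-standard extension, declare the type
# explicitly (P12, PEM, or JKS) in application.properties:
cat >> src/main/resources/application.properties <<'EOF'
quarkus.http.ssl.certificate.key-store-file=server-keystore
quarkus.http.ssl.certificate.key-store-file-type=P12
quarkus.http.ssl.certificate.key-store-password=changeit
EOF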
Chapter 26. Probe schema reference | Chapter 26. Probe schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaExporterSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , TlsSidecar , ZookeeperClusterSpec Property Description failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. integer initialDelaySeconds The initial delay before the health is first checked. Defaults to 15 seconds. Minimum value is 0. integer periodSeconds How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. integer successThreshold Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1. integer timeoutSeconds The timeout for each attempted health check. Defaults to 5 seconds. Minimum value is 1. integer | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-Probe-reference
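As an illustration of how these Probe properties are typically applied, the following sketch patches the liveness probe of a Kafka cluster resource; the cluster name my-cluster and the values shown are assumptions, not defaults taken from this reference.
# Tune the Kafka broker liveness probe on an existing Kafka custom resource.
# successThreshold stays at 1 because it must be 1 for liveness probes.
oc patch kafka my-cluster --type merge \
  -p '{"spec":{"kafka":{"livenessProbe":{"initialDelaySeconds":30,"periodSeconds":10,"timeoutSeconds":5,"failureThreshold":3,"successThreshold":1}}}}'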
7.104. kexec-tools | 7.104. kexec-tools 7.104.1. RHBA-2013:0281 - kexec-tools bug fix and enhancement update Updated kexec-tools packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The kexec fastboot mechanism allows booting a Linux kernel from the context of an already running kernel. The kexec-tools package provides the /sbin/kexec binary and ancillary utilities that form the user-space component of the kernel's kexec feature. Bug Fixes BZ#628610 When starting the kdump service, kdump always verifies the following vendor model attributes on the present block devices: "/sys/block/vda/device/model", "/sys/block/vda/device/rev" and "/sys/block/vda/device/type". However, virtio block devices do not provide these attributes to sysfs, so if such a device was tested, the following error messages were displayed: This update modifies the underlying code to restrain kdump from printing these error messages if a block device does not provide the aforementioned sysfs attributes. BZ# 770000 Previously, if memory ballooning was enabled in the first kernel, the virtio balloon driver was included in the kdump kernel, which led to extensive memory consumption. Consequently, kdump failed due to an out of memory (OOM) error and the vmcore file could not be saved. With this update, the virtio_balloon kernel module is no longer loaded in the second kernel so that an OOM failure no longer prevents kdump from capturing vmcore. BZ#788253 Previously, the microcode.ko module was included and loaded in the kdump kernel; however, the related firmware was not included in the kdump initrd. As a consequence, the kdump kernel waited for a 60-second timeout to expire before loading the module. This update modifies kdump to exclude the microcode driver from the second kernel so that the kdump kernel no longer waits unnecessarily and loads kernel modules as expected. BZ# 813354 The kdump.conf(5) man page previously did not document what file system types are supported by kdump. The user could therefore attempt to specify an unsupported file-system-type option, such as "auto", in the kdump.conf file. This would result in a failure to start the kdump service while the user expected success. With this update, all supported file system types are clearly listed in the kdump.conf(5) man page. BZ#816467 When configuring kdump to dump a core file to a remote target over SSH without requiring a password, the "service kdump propagate" command has to be executed to generate and propagate SSH keys to the target system. This action required SELinux to be switched from enforcing mode to permissive mode and back. Previously, the kdump init script used an incorrect test condition to determine SELinux mode so that SELinux mode could not be switched as required. Consequently, if SELinux was in enforcing mode, SSH keys could not be generated and kdump failed to start. This update removes the code used to switch between permissive and enforcing modes, which is no longer required because with Red Hat Enterprise Linux 6.3 SELinux added a policy allowing applications to access the ssh-keygen utility to generate SSH keys. SSH keys can now be generated and propagated as expected, and kdump no longer fails to start in this scenario. BZ#818645 When dumping a core file on IBM System z architecture using the line mode terminals, kdump displays its progress on these terminals.
However, these terminals do not support cursor positioning, so the formatting of the kdump output was incorrect and the output was hard to read. With this update, a new environment variable, TERM, has been introduced to correct this problem. If "TERM=dumb" is set, the makedumpfile utility produces an easily readable output on the line mode terminals. BZ#820474 Previously, kdump expected that the generic ATA driver was always loaded as the ata_generic.ko kernel module and the mkdumprd utility thus added the module explicitly. However, the ata_generic.ko module does not exist on the IBM System z architecture and this assumption caused the kdump service to fail to start if a SCSI device was specified as a dump target on these machines. With this update, mkdumprd has been modified to load the ata_generic module only when required by the specific hardware. The kdump service now starts as expected on IBM System z architecture with a SCSI device specified as a dump target. BZ#821376 Previously, kdump always called the hwclock command to set the correct time zone. However, the Real Time Clock (RTC) interface, which is required by hwclock, is not available on IBM System z architecture. Therefore, running kdump on these machines resulted in the following error messages being emitted: With this update, kdump has been modified to no longer call the hwclock command when running on IBM System z, and the aforementioned error messages no longer occur. BZ#825640 When dumping a core file to a remote target using SSH, kdump sends random seeds from the /dev/mem device to the /dev/random device to generate the entropy required to establish a successful SSH connection. However, when dumping a core file on IBM System z with the CONFIG_STRICT_DEVMEM configuration option enabled, reading /dev/mem was denied and the dump attempt failed with the following error: With this update, kdump has been modified to reuse the /etc/random_seed file instead of reading /dev/mem. Dumping no longer fails and the core file can now be successfully dumped to a remote target using SSH. BZ#842476 When booting to the kdump kernel while the local file system specified as the dump target was unmounted, the mkdumprd utility did not include the kernel module required for the respective file-system driver in the kdump initrd. Consequently, kdump could not mount the dump device and failed to capture vmcore. With this update, mkdumprd has been modified to always install the required file system module when dumping a core file to the local file system. The vmcore file can be successfully captured in this scenario. BZ#859824 When dumping a core file to a remote target using a bonded interface and the target was connected by other than the bond0 interface, kdump failed to dump the core file. This happened because the bonding driver in the kdump kernel creates only one bonding interface named bond0 by default. This update modifies kdump to use the correct bonding interface in the kdump init script so that a core file can be dumped as expected in this scenario. BZ#870957 When dumping a core file to a SCSI device over Fibre Channel Protocol (FCP) on IBM System z, the zFCP device has to be configured and set online before adding WWPN and SCSI LUN to the system. Previously, the mkdumprd utility parsed the zfcp.conf file incorrectly so that the zFCP device could not be set up and the kdump kernel became unresponsive during the boot. Consequently, kdump failed to dump a core file to the target SCSI device.
With this update, mkdumprd has been modified to parse the zfcp.conf file correctly and kdump can now successfully dump a core file to the SCSI target on IBM System z. Also, mkdumprd previously always tried to set Direct Access Storage Devices (DASD) online on IBM System z. This resulted in "hush: can't open '/sys/bus/ccw/devices//online': No such file or directory" error messages being emitted when booting the kdump kernel in a SCSI-only environment. This update modifies mkdumprd to skip entries from the dasd.conf file if Linux on IBM System z runs without DASD devices. The aforementioned error messages no longer occur during the kdump kernel boot in the SCSI-only environment on IBM System z. BZ#872086 Previously, the kexec utility incorrectly recognized the Xen DomU (HVM) guest as the Xen Dom0 management domain. Consequently, the kernel terminated unexpectedly and the kdump utility generated the vmcore dump file with no NT_PRSTATUS notes. The crash also led to a NULL pointer dereference. With this update, kexec collects positions and sizes of NT_PRSTATUS from /sys/devices/system/cpu/cpuN/crash_notes on Xen DomU and from /proc/iomem on Xen Dom0. As a result, the crashes no longer occur. BZ#874832 Due to recent changes, LVM assumes that the udev utility is always present on the system and creates correct device nodes and links. However, the kdump initramfs image does not contain udev, so LVM was unable to create disk devices and kdump failed. With this update, the mkdumprd utility modifies the lvm.conf configuration file to inform LVM that initramfs does not contain functional udev. If the lvm.conf file does not exist, mkdumprd creates it. LVM now creates the devices correctly and kdump works as expected. BZ#876891 Previously, the mlx4_core kernel module was loaded in the kdump kernel on systems using Mellanox ConnectX InfiniBand adapter cards. However, the mlx4_core module requires an extensive amount of memory, which caused these systems to run into an OOM situation and kdump failed. With this update, the second kernel no longer loads the mlx4_core module so that the OOM situation no longer occurs and kdump captures the vmcore file successfully in this scenario. BZ#880040 Due to recent changes, the libdevmapper library assumes that the udev utility is always present on the system and creates correct device nodes for multipath devices. However, the kdump initramfs image does not contain udev; therefore, LVM was unable to create disk devices and kdump failed. With this update, the mkdumprd utility sets the DM_DISABLE_UDEV environment variable to 1 to inform libdevmapper that the initramfs image does not contain functional udev. LVM now creates the devices correctly and kdump can successfully dump a core file to a multipath device. BZ# 892703 When setting up a network in the kdump kernel, the mkdumprd code incorrectly renamed network bridges along with NIC names in the network configuration files. This caused the kdump network setup to fail and the vmcore file could not be captured on the remote target. This update modifies kdump to substitute names of network devices correctly so that the network can be set up and vmcore dumped on the remote target as expected. Enhancements BZ#822146 With this update, the mkdumprd utility has been modified to support multipath storage devices as dump targets, which includes the ability to activate multiple NICs in the second kernel.
BZ#850623 This update modifies kdump to always extract the dmesg output from the captured vmcore dump file, and save the output in a separate text file before dumping the core file. BZ#878200 The /usr/share/doc/kexec-tools-2.0.0/kexec-kdump-howto.txt file has been modified to provide a comprehensive list of supported, unsupported, and unknown dump targets under the "Dump Target support status" section. Users of kexec-tools are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | [
"cat: /sys/block/vda/device/model: No such file or directory cat: /sys/block/vda/device/type: No such file or directory",
"hwclock: can't open '/dev/misc/rtc': No such file or directory",
"dd: /dev/mem: Operation not permitted"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/kexec-tools |
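For readers setting up the SSH dump target discussed above (BZ#816467), the following is a minimal, non-authoritative sketch; the remote host name, user, and makedumpfile options are illustrative placeholders, and directive names should be verified against the kdump.conf(5) man page shipped with your kexec-tools version.

# /etc/kdump.conf excerpt -- send the core file to a remote host over SSH (placeholder host)
ssh [email protected]
# Collect a compressed, filtered dump to keep the transfer small
core_collector makedumpfile -c --message-level 1 -d 31

# Generate and propagate the SSH keys, then restart the service (run as root)
service kdump propagate
service kdump restart

The propagate step corresponds to the "service kdump propagate" command described in BZ#816467 and, with the fix above, no longer requires switching SELinux between enforcing and permissive modes.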
Chapter 6. Bug fixes | Chapter 6. Bug fixes This section describes bugs with significant impact on users that were fixed in this release of Red Hat Ceph Storage. In addition, the section includes descriptions of fixed known issues found in previous versions. 6.1. The Ceph Ansible utility The size of the replication pool can now be modified after the Ceph cluster deployment Previously, increasing the size of the replication pool failed after the Ceph cluster was deployed using director. This occurred because an issue with the task in charge of customizing the pool size prevented it from executing when the playbook was rerun. With this update, you can now modify pool size after cluster deployment. (BZ#1743242) Ceph Ansible supports multiple grafana instances during a Ceph dashboard deployment Previously, in a multi-node environment, ceph-ansible was not able to configure multiple grafana instances as only one node was supported, leaving the remaining nodes unconfigured. With this update, ceph-ansible supports multiple instances and injects Ceph-specific layouts on all the Ceph Monitor nodes during the deployment of the Ceph Dashboard. ( BZ#1784011 ) Running the Ansible purge-cluster.yml playbook no longer fails when the dashboard feature is disabled Previously, using the purge-cluster.yml playbook to purge clusters failed when the dashboard feature was disabled with the following error message: This occurred because the dashboard_enabled variable was ignored. With this update, the dashboard_enabled variable is correctly handled, and purge-cluster.yml runs successfully. ( BZ#1785736 ) Red Hat Ceph Storage installation on Red Hat OpenStack Platform no longer fails Previously, the ceph-ansible utility became unresponsive when attempting to install Red Hat Ceph Storage with Red Hat OpenStack Platform 16, and returned an error similar to the following: This occurred because ceph-ansible read the value of the fact container_exec_cmd from the wrong node in handler_osds.yml. With this update, ceph-ansible reads the value of container_exec_cmd from the correct node, and the installation proceeds successfully. ( BZ#1792320 ) Ansible unsets the norebalance flag after it completes Previously, Ansible did not unset the norebalance flag and it had to be unset manually. With this update, the rolling-update.yml Ansible playbook unsets the norebalance flag automatically after it completes and a manual unset is not required. ( BZ#1793564 ) Ansible upgrades a multisite Ceph Object Gateway when the Dashboard is enabled Previously, when the Red Hat Ceph Storage Dashboard was enabled and an attempt was made to use Ansible to upgrade to a later version of Red Hat Ceph Storage, the upgrade to the secondary Ceph Object Gateway site in a multisite setup failed. With this update to Red Hat Ceph Storage, upgrade of the secondary site works as expected. ( BZ#1794351 ) Ceph Ansible works with Ansible 2.9 Previously, ceph-ansible versions 4.0 and above did not work with Ansible version 2.9. This occurred because the ceph-validate role did not allow ceph-ansible to be run against Ansible 2.9. With this update, ceph-ansible works with Ansible 2.9. ( BZ#1807085 ) Ceph installations with custom software repositories no longer fail Previously, using custom repositories to install Ceph was not allowed. This occurred because the redhat_custom_repository.yml file was removed. With this update, the redhat_custom_repository.yml file is included, and custom repositories can be used to install Red Hat Ceph Storage.
Note Only Red Hat-signed packages can use custom software repositories to install Ceph. Custom third-party software repositories are not supported. (BZ#1808362) The ceph-ansible purge playbook does not fail if dashboard was not installed Previously, when the dashboard was not deployed, the purge playbook failed when purging the cluster because it tried to remove dashboard related resources that did not exist. Consequently, the purge playbook stated that the dashboard was deployed, and the purge failed. With this update, ceph-ansible does not purge dashboard related resources if they are not part of the deployment, and the purge completes successfully. (BZ#1808988) Using a standalone nfs-ganesha daemon with an external Ceph storage cluster no longer fails to copy the keyring during deployment Previously, in configurations consisting of a standalone nfs-ganesha daemon and an external Ceph storage cluster, the Ceph keyring was not copied to /etc/ceph during a Ceph Ansible deployment. With this update, the Ceph keyring is copied to the /etc/ceph/ directory. ( BZ#1814942 ) Ceph Ansible updates the privileges of the dashboard admin user after initial install Previously, ceph-ansible could only set the privileges of the dashboard user when it was first created. Running the playbooks after changing dashboard_admin_user_ro: false from its original setting during install would not update the privileges of the user. In Red Hat Ceph Storage 4.1z1, ceph-ansible has been updated to support changing the dashboard user privileges on successive runs of the playbooks. ( BZ#1826002 ) The docker-to-podman.yml playbook now migrates dashboard containers Previously, running the docker-to-podman.yml playbook migrated all the daemons from docker to podman , except for grafana-server and the dashboard containers. With this release, running docker-to-podman.yml successfully migrates all of the daemons. ( BZ#1829389 ) Storage directories from old containers are removed Previously, storage directories for old containers were not removed. This could cause high disk usage. This could be seen if you installed Red Hat Ceph Storage, purged it, and reinstalled it. In Red Hat Ceph Storage 4.1z1, storage directories for containers that are no longer being used are removed and excessive disk usage does not occur. ( BZ#1834974 ) Upgrading a containerized cluster from 4.0 to 4.1 on Red Hat Enterprise Linux 8.1 no longer fails Previously, when upgrading a Red Hat Ceph Storage cluster from 4.0 to 4.1, the upgrade could fail with an error on set_fact ceph_osd_image_repodigest_before_pulling . Due to an issue with how the container image tag was updated, ceph-ansible could fail. In Red Hat Ceph Storage 4.1z1, ceph-ansible has been updated so it no longer fails and upgrading works as expected. ( BZ#1844496 ) Enabling the Ceph Dashboard fails on an existing OpenStack environment In an existing OpenStack environment, configuring the Ceph Dashboard's IP address and port after the Ceph Manager dashboard module was enabled caused a conflict with the HAProxy configuration. To avoid this conflict, configure the Ceph Dashboard's IP address and port before enabling the Ceph Manager dashboard module. ( BZ#1851455 ) Red Hat Ceph Storage Dashboard fails when deploying a Ceph Object Gateway secondary site Previously, the Red Hat Ceph Storage Dashboard would fail to deploy the secondary site in a Ceph Object Gateway multi-site deployment, because when Ceph Ansible ran the radosgw-admin user create command, the command would return an error.
With this release, the Ceph Ansible task in the deployment process has been split into two different tasks. Doing this allows the Red Hat Ceph Storage Dashboard to deploy a Ceph Object Gateway secondary site successfully. ( BZ#1851764 ) The Ceph File System Metadata Server installation fails when running a playbook with the --limit option Some facts were not getting set on the first Ceph Monitor, but those facts were getting set on all respective Ceph Monitor nodes. When running a playbook with the --limit option, these facts were not set on the Ceph Monitor if the Ceph Monitor was not part of the batch. This would cause the playbook to fail when these facts were used in a task for the Ceph Monitor. With this release, these facts are set on the Ceph Monitor whether the playbook uses the --limit option or not. ( BZ#1852796 ) Adding a new Ceph Object Gateway instance when upgrading fails The radosgw_frontend_port option did not consider more than one Ceph Object Gateway instance, and configured port 8080 for all instances. With this release, the radosgw_frontend_port option is increased for each Ceph Object Gateway instance, allowing you to use more than one Ceph Object Gateway instance. ( BZ#1859872 ) Ceph Ansible's shrink-osd.yml playbook fails when using FileStore in a containerized environment A default value was missing in Ceph Ansible's shrink-osd.yml playbook, which was causing a failure when shrinking a FileStore-backed Ceph OSD in a containerized environment. A Ceph OSD previously prepared using ceph-disk and dmcrypt left the encrypted key undefined in the corresponding Ceph OSD file. With this release, a default value was added so the Ceph Ansible shrink-osd.yml playbook can run on Ceph OSDs that have been prepared using dmcrypt in containerized environments. ( BZ#1862416 ) Using HTTPS breaks access to Prometheus and the alert manager Setting the dashboard_protocol option to https was causing the Red Hat Ceph Storage Dashboard to try to access the Prometheus API, which does not support TLS natively. With this release, Prometheus and the alert manager are forced to use the HTTP protocol when setting the dashboard_protocol option to https . ( BZ#1866006 ) The Ceph Ansible shrink-osd.yml playbook does not clean the Ceph OSD properly The zap action done by the ceph_volume module does not handle the osd_fsid parameter. This caused the Ceph OSD to be improperly zapped by leaving logical volumes on the underlying devices. With this release, the zap action properly handles the osd_fsid parameter, and the Ceph OSD can be cleaned properly after shrinking. ( BZ#1873010 ) The Red Hat Ceph Storage rolling update fails when multiple storage clusters exist Running the Ceph Ansible rolling_update.yml playbook when multiple storage clusters are configured would cause the rolling update to fail because a storage cluster name could not be specified. With this release, the rolling_update.yml playbook uses the --cluster option to allow for a specific storage cluster name. ( BZ#1876447 ) The hosts field has an invalid value when doing a rolling update A Red Hat Ceph Storage rolling update fails because the syntax changed in the evaluation of the hosts value in the Ceph Ansible rolling_update.yml playbook. With this release, a fix to the code updates the syntax properly when the hosts field is specified in the playbook.
( BZ#1876803 ) Running the rolling_update.yml playbook does not retrieve the storage cluster fsid When the rolling_update.yml playbook is run and the Ceph Ansible inventory does not have Ceph Monitor nodes defined, for example, in an external scenario, the storage cluster fsid is not retrieved. This causes the rolling_update.yml playbook to fail. With this release, the fsid retrieval is skipped when there are no Ceph Monitors defined in the inventory, allowing the rolling_update.yml playbook to execute when no Ceph Monitors are present. ( BZ#1877426 ) 6.2. The Cockpit Ceph installer Cockpit Ceph Installer no longer deploys Civetweb instead of Beast for RADOS Gateway Previously, the Cockpit Ceph Installer configured RADOS Gateway (RGW) to use the deprecated Civetweb frontend instead of the currently supported Beast front end. With this update to Red Hat Ceph Storage, the Cockpit Ceph Installer deploys the Beast frontend with RGW as expected. ( BZ#1806791 ) The ansible-runner-service.sh script no longer fails due to a missing repository Previously, the Cockpit Ceph Installer startup script could fail due to a missing repository in /etc/containers/registries.conf . The missing repository was registry.redhat.io . In Red Hat Ceph Storage 4.1z1, the ansible-runner-service.sh script has been updated to explicitly state the registry name so the repository does not have to be included in /etc/containers/registries.conf . ( BZ#1809003 ) Cockpit Ceph Installer no longer fails on physical network devices with bridges Previously, the Cockpit Ceph Installer failed if physical network devices were used in a Linux software bridge. This was due to a logic error in the code. In Red Hat Ceph Storage 4.1z1, the code has been fixed and you can use the Cockpit Ceph Installer to deploy on nodes with bridges on the physical network interfaces. ( BZ#1816478 ) Cluster installation no longer fails due to cockpit-ceph-installer not setting admin passwords for dashboard and grafana Previously, cockpit-ceph-installer did not allow you to set the admin passwords for dashboard and Grafana. This caused storage cluster configuration to fail because ceph-ansible requires the default passwords to be changed. With this update, cockpit-ceph-installer allows you to set the admin passwords in Cockpit so the storage cluster configuration can complete successfully. ( BZ#1839149 ) Cockpit Ceph Installer allows RPM installation type on Red Hat Enterprise Linux 8 Previously, on Red Hat Enterprise Linux 8, the Cockpit Ceph Installer would not allow you to select RPM for Installation type; you could only perform a containerized installation. In Red Hat Ceph Storage 4.1z1, you can select RPM to install Ceph on bare metal. ( BZ#1850814 ) 6.3. Ceph File System Improved Ceph File System performance as the number of subvolume snapshots increases Previously, creating more than 400 subvolume snapshots was degrading the Ceph File System performance by slowing down file system operations. With this release, you can configure subvolumes to only support subvolume snapshots at the subvolume root directory, and you can prevent cross-subvolume links and renames. Doing this allows for the creation of higher numbers of subvolume snapshots, and does not degrade the Ceph File System performance. ( BZ#1848503 ) Big-endian systems failed to decode metadata for Ceph MDS Previously, decoding the Ceph MDS metadata on big-endian systems would fail. This was caused by Ceph MDS ignoring the endianness when decoding structures from RADOS.
The Ceph MDS metadata routines were fixed to correct this issue, resulting in Ceph MDS decoding the structure correctly. ( BZ#1896555 ) 6.4. Ceph Manager plugins Ceph Manager crashes when setting the alerts interval There was a code bug in the alerts module for Ceph Manager, which was causing the Ceph Manager to crash. With this release, this code bug was fixed, and you can set the alerts interval without the Ceph Manager crashing. ( BZ#1849894 ) 6.5. The Ceph Volume utility The ceph-volume lvm batch command fails with mixed device types The ceph-volume command did not return the expected return code when devices were filtered using the lvm batch sub-command, and when the Ceph OSD strategy changed. This was causing ceph-ansible tasks to fail. With this release, the ceph-volume command returns the correct status code when the Ceph OSD strategy changes, allowing ceph-ansible to properly check if new Ceph OSDs can be added or not. ( BZ#1825113 ) The ceph-volume command is treating a logical volume as a raw device The ceph-volume command was treating a logical volume as a raw device, which was causing the add-osds.yml playbook to fail. This was not allowing additional Ceph OSDs to be added to the storage cluster. With this release, a code bug was fixed in ceph-volume so it handles logical volumes properly, and the add-osds.yml playbook can be used to add Ceph OSDs to the storage cluster. ( BZ#1850955 ) 6.6. Containers The nfs-ganesha daemon starts normally Previously, a configuration using nfs-ganesha with the RADOS backend would not start because the nfs-ganesha-rados-urls library was missing. This occurred because the nfs-ganesha library package for the RADOS backend was moved to a dedicated package. With this update, the nfs-ganesha-rados-urls package is added to the Ceph container image, so the nfs-ganesha daemon starts successfully. ( BZ#1797075 ) 6.7. Ceph Object Gateway Ceph Object Gateway properly applies AWS request signing Previously, the Ceph Object Gateway did not properly apply AWS request signing for some headers and generated the following error message: With this release, the Ceph Object Gateway code was fixed to properly sign headers. As a result, request signing now succeeds. ( BZ#1665683 ) The radosgw-admin bucket check command no longer displays incomplete multipart uploads Previously, running the radosgw-admin bucket check command displayed incomplete multipart uploads. This could cause confusion for a site admin because the output might have appeared as though the bucket index were damaged. With this update, the command displays only errors and orphaned objects, and the incomplete uploads are filtered out. ( BZ#1687971 ) Uneven distribution of omap keys with bucket shard objects In versioned buckets, delete object operations were occasionally unable to fully complete. In this state, the bucket index entries for these objects had their name and instance strings zeroed out. When there was a subsequent reshard, the empty name and instance strings caused the entry to be resharded to shard 0. Entries that did not belong on shard 0 ended up there. This put a disproportionate number of entries on shard 0, making it larger than the other shards. With this release, the name and instance strings are no longer cleared during this portion of the delete operation. If a reshard takes place, the entries that were not fully deleted nonetheless end up on the correct shard and are not forced to shard 0.
( BZ#1749090 ) Increase in overall throughput of Object Gateway lifecycle processing Previously, Object Gateway lifecycle processing performance was constrained by the lack of parallelism due to the increasing workload of objects or buckets with many buckets or containers in the given environment. With this update, parallelism exists in two dimensions: a single Object Gateway instance can have several lifecycle processing threads, and each thread has multiple work-pool threads executing the lifecycle work. Additionally, this update improved the allocation of shards to workers, thereby increasing overall throughput. ( BZ#1794715 ) Bucket tenanting status is interpreted correctly when rgw_parse_bucket_key is called Previously, some callers of rgw_parse_bucket_key such as radosgw-admin bucket stats , which processed keys in a loop, could incorrectly interpret untenanted buckets as tenanted if some tenanted buckets were listed. If rgw_parse_bucket_key was called with a non-empty rgw bucket argument, it would not correctly assign an empty value for bucket::tenant when no tenant was present in the key. In Red Hat Ceph Storage 4.1z1, the bucket tenant member is now cleared if no tenant applies and bucket tenanting status is interpreted correctly. ( BZ#1830330 ) The Ceph Object Gateway tries to cache and access anonymous user information Previously, the Ceph Object Gateway tried to fetch anonymous user information for each request that had not been authenticated. This unauthenticated access was causing high load on a single Ceph OSD in the storage cluster. With this release, the Ceph Object Gateway no longer tries to fetch anonymous user information, resulting in a decrease in latency and load on a single Ceph OSD. ( BZ#1831865 ) Lifecycle expiration is reported correctly for objects Previously, incorrect lifecycle expiration could be reported for some objects, due to the presence of a prefix rule. This occurred because the optional prefix restriction in lifecycle expiration rules was ignored when generating expiration headers used in S3 HEAD and GET requests. In Red Hat Ceph Storage 4.1z1, the rule prefix is now part of the expiration header rule matching and lifecycle expiration for objects is reported correctly. ( BZ#1833309 ) A high number of objects in the rgw.none bucket stats The code that calculates stats failed to check, in some cases, whether a bucket index entry referenced an object that already existed. This was causing the bucket stats to be incorrect. With this release, code was added to check for existence, fixing the bucket stats. ( BZ#1846035 ) A call to an ordered bucket listing gets stuck A code bug in the bucket ordered list operation could cause, under specific circumstances, this operation to get stuck in a loop and never complete. With this release, this code bug was fixed, and as a result the call to an ordered bucket listing completes as expected. ( BZ#1853052 ) Life-cycle processing ignores NoncurrentDays in NoncurrentVersionExpiration A variable that was supposed to contain the modification time of objects during parallel life-cycle processing was incorrectly initialized. This caused non-current versions of objects in buckets with a non-current expiration rule to expire before their intended expiration time. With this release, the modification time ( mtime ) is correctly initialized and propagated to the life-cycle's processing queue. As a result, the non-current expiration happens after the correct time period.
( BZ#1875305 ) Parts of some objects were erroneously added to garbage collection When reading objects using the Ceph Object Gateway, if reading parts of those objects took more than half of the value defined by the rgw_gc_obj_min_wait option, then their tail object was added to the garbage collection list. Those tail objects in the garbage collection list were deleted, resulting in data loss. With this release, the garbage collection feature meant to delay garbage collection for deleted objects was disabled. As a result, objects that take a long time to read using the Ceph Object Gateway are not added to the garbage collection list. ( BZ#1892644 ) 6.8. Multi-site Ceph Object Gateway The RGW daemon no longer crashes on shutdown Previously, the RGW process would abort in certain circumstances due to a race condition during radosgw shutdown. One situation in which this issue was seen was when deleting objects while using multisite. This was caused by dereferencing unsafe memory. In Red Hat Ceph Storage 4.1z1, unsafe memory is no longer dereferenced and the RGW daemon no longer crashes. ( BZ#1840858 ) 6.9. RADOS A health warning status is reported when no Ceph Managers or OSDs are in the storage cluster In previous Red Hat Ceph Storage releases, the storage cluster health status was HEALTH_OK even though there were no Ceph Managers or OSDs in the storage cluster. With this release, this health status has changed, and a health warning is reported if a storage cluster is not set up with Ceph Managers, or if all the Ceph Managers go down. Because Red Hat Ceph Storage heavily relies on the Ceph Manager to deliver key features, it is not advisable to run a Ceph storage cluster without Ceph Managers or OSDs. ( BZ#1761474 ) The ceph config show command displays the correct fsid Previously, the ceph config show command only displayed the configuration keys present in the Ceph Monitor's database, and because the fsid is a NO_MON_UPDATE configuration value, the fsid was not displaying correctly. With this release, the ceph config show command displays the correct fsid value. ( BZ#1772310 ) Small objects and files in RADOS no longer use more space than required The Ceph Object Gateway and the Ceph file system (CephFS) store small objects and files as individual objects in RADOS. Previously, objects smaller than BlueStore's default minimum allocation size ( min_alloc_size ) of 16 KB used more space than required. This happened because the earlier default value of BlueStore's min_alloc_size was 16 KB for solid state devices (SSDs). Currently, the default value of min_alloc_size for SSDs is 4 KB. This enables better use of space with no impact on performance. ( BZ#1788347 ) Slow ops not being logged in cluster logs Previously, slow ops were not being logged in cluster logs. They were logged in the osd or mon logs, but lacked the expected level of detail. With this release, slow ops are now logged in cluster logs, at a level of detail that makes the logs useful for debugging. ( BZ#1807184 ) Backfills are no longer delayed during placement group merging Previously, in Red Hat Ceph Storage, placement group merges could take longer than expected if the acting set for the source and target placement groups did not match before merging. Backfills done when there was a mismatch could appear to stall. In Red Hat Ceph Storage 4.1z1, the code has been updated to only merge placement groups whose acting sets match. This change allows merges to complete without delay.
( BZ#1810949 ) Ceph Monitors can grow beyond the memory target Auto-tuning the memory target was only done on the Ceph Monitor leader and not the Ceph Monitors following the leader. This was causing the Ceph Monitor followers to exceed the set memory target, resulting in the Ceph Monitors crashing once their memory was exhausted. With this release, the auto-tuning process applies the memory target to the Ceph Monitor leader and its followers so memory is not exhausted on the system. ( BZ#1827856 ) Disk space usage does not increase when OSDs are down for a long time Previously, when an OSD was down for a long time, a large number of osdmaps were stored and not trimmed. This led to excessive disk usage. In Red Hat Ceph Storage 4.1z1, osdmaps are trimmed regardless of whether or not there are down OSDs and disk space is not overused. ( BZ#1829646 ) Health metrics are correctly reported when smartctl exits with a non-zero error code Previously, the ceph device get-health-metrics command could fail to report metrics if smartctl exited with a non-zero error code even though running smartctl directly reported the correct information. In this case a JSON error was reported instead. In Red Hat Ceph Storage 4.1z1, the ceph device get-health-metrics command reports metrics even if smartctl exits with a non-zero error code as long as smartctl itself reports correct information. ( BZ#1837645 ) Crashing Ceph Monitors caused by a negative time span Previously, Ceph Monitors could crash when triggered by a monotonic clock going back in time. The backward clock movement produced a negative monotonic time span, which triggered an assertion in the Ceph Monitor and caused it to crash. The Ceph Monitor code was updated to tolerate this condition and interpret it as a zero-length interval and not a negative value. As a result, the Ceph Monitor does not crash in this scenario. ( BZ#1847685 ) Improvements to the encoding and decoding of messages on storage clusters Deploying a Red Hat Ceph Storage cluster containing heterogeneous architectures, such as x86_64 and s390, could cause system crashes. Also, under certain workloads for CephFS, Ceph Monitors on s390x nodes could crash unexpectedly. With this release, entity_addrvec_t is properly decoded with a marker of 1 , the enum types are properly decoded on big-endian systems by using an intermediate integer variable type, and the encoding and decoding of float types on big-endian systems was fixed. As a result, heterogeneous storage clusters and Ceph Monitors on s390x nodes no longer crash. ( BZ#1895040 ) 6.10. RADOS Block Devices (RBD) Multiple rbd unmap commands can be issued concurrently and the corresponding RBD block devices are unmapped successfully Previously, issuing concurrent rbd unmap commands could result in udev-related event race conditions. The commands would sporadically fail, and the corresponding RBD block devices might remain mapped to their node. With this update, the udev-related event race conditions have been fixed, and the commands no longer fail. ( BZ#1784895 ) | [
"registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.1 msg: '[Errno 2] No such file or directory'",
"'Error: unable to exec into ceph-mon-dcn1-computehci1-2: no container with name or ID ceph-mon-dcn1-computehci1-2 found: no such container'",
"SignatureDoesNotMatch"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/release_notes/bug-fixes |
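As a quick way to check the rolling-update behavior described for the norebalance flag (BZ#1793564), the following shell sketch is illustrative only; it assumes an administrative keyring is available on the node where it is run and should be tried on a test cluster first.

# Show cluster status; 'norebalance' appears among the flags while it is set
ceph -s | grep -i norebalance
# Fallback for releases without the fix: clear the flag manually after the update completes
ceph osd unset norebalance

On updated clusters the rolling-update.yml playbook performs the unset step itself, so the manual command should only be needed as a fallback.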
Chapter 4. Adding Servers to the Trusted Storage Pool | Chapter 4. Adding Servers to the Trusted Storage Pool A storage pool is a network of storage servers. When the first server starts, the storage pool consists of that server alone. Adding additional storage servers to the storage pool is achieved using the probe command from a running, trusted storage server. Important Before adding servers to the trusted storage pool, you must ensure that the ports specified in Chapter 3, Considerations for Red Hat Gluster Storage are open. On Red Hat Enterprise Linux 7, enable the glusterFS firewall service in the active zones for runtime and permanent mode using the following commands: To get a list of active zones, run the following command: To allow the firewall service in the active zones, run the following commands: For more information about using firewalls, see section Using Firewalls in the Red Hat Enterprise Linux 7 Security Guide : https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html . Note When any two gluster commands are executed concurrently on the same volume, the following error is displayed: Another transaction is in progress. This behavior in Red Hat Gluster Storage prevents two or more commands from simultaneously modifying a volume configuration, potentially resulting in an inconsistent state. Such an implementation is common in environments with monitoring frameworks such as the Red Hat Gluster Storage Console and Red Hat Enterprise Virtualization Manager. For example, in a four-node Red Hat Gluster Storage Trusted Storage Pool, this message is observed when the gluster volume status VOLNAME command is executed from two of the nodes simultaneously. 4.1. Adding Servers to the Trusted Storage Pool The gluster peer probe [server] command is used to add servers to the trusted storage pool. Note Probing a higher-version Red Hat Gluster Storage node from a lower-version node is not supported. Adding Three Servers to a Trusted Storage Pool Create a trusted storage pool consisting of three storage servers, which comprise a volume. Prerequisites The glusterd service must be running on all storage servers requiring addition to the trusted storage pool. See Chapter 22, Starting and Stopping the glusterd service for service start and stop commands. Server1 , the trusted storage server, is started. The host names of the target servers must be resolvable by DNS. Run gluster peer probe [server] from Server 1 to add additional servers to the trusted storage pool. Note Self-probing Server1 will result in an error because it is part of the trusted storage pool by default. All the servers in the Trusted Storage Pool must have RDMA devices if either RDMA or RDMA,TCP volumes are created in the storage pool. The peer probe must be performed using the IP address or hostname assigned to the RDMA device. Verify the peer status from all servers using the following command: Important If the existing trusted storage pool has a geo-replication session, then after adding the new server to the trusted storage pool, perform the steps listed in Section 10.6, "Starting Geo-replication on a Newly Added Brick, Node, or Volume" . Note Verify that time is synchronized on all Gluster nodes by using the following command: | [
"firewall-cmd --get-active-zones",
"firewall-cmd --zone= zone_name --add-service=glusterfs firewall-cmd --zone= zone_name --add-service=glusterfs --permanent",
"gluster peer probe server2 Probe successful gluster peer probe server3 Probe successful gluster peer probe server4 Probe successful",
"gluster peer status Number of Peers: 3 Hostname: server2 Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5 State: Peer in Cluster (Connected) Hostname: server3 Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7 State: Peer in Cluster (Connected) Hostname: server4 Uuid: 3e0caba-9df7-4f66-8e5d-cbc348f29ff7 State: Peer in Cluster (Connected)",
"for peer in `gluster peer status | grep Hostname | awk -F':' '{print USD2}' | awk '{print USD1}'`; do clockdiff USDpeer; done"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-trusted_storage_pools |
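To complement the probe workflow shown above, the following bash sketch is a non-authoritative example; the server names are placeholders, and the loop simply automates the same gluster peer probe and verification commands documented in this chapter.

#!/bin/bash
# Probe additional servers from the first trusted server (placeholder host names)
for server in server2 server3 server4; do
    gluster peer probe "$server"
done
# Confirm that every peer reports 'Peer in Cluster (Connected)'
gluster peer status
# Alternative summary view of pool membership
gluster pool list

Run the script only from a server that is already part of the trusted storage pool, since probes must originate from a trusted node.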
2.3. Starting the Piranha Configuration Tool Service | 2.3. Starting the Piranha Configuration Tool Service After you have set the password for the Piranha Configuration Tool , start or restart the piranha-gui service located in /etc/rc.d/init.d/piranha-gui . To do this, type the following command as root: /sbin/service piranha-gui start or /sbin/service piranha-gui restart Issuing this command starts a private session of the Apache HTTP Server by calling the symbolic link /usr/sbin/piranha_gui -> /usr/sbin/httpd . For security reasons, the piranha-gui version of httpd runs as the piranha user in a separate process. The fact that piranha-gui leverages the httpd service means that: The Apache HTTP Server must be installed on the system. Stopping or restarting the Apache HTTP Server via the service command stops the piranha-gui service. Warning If the command /sbin/service httpd stop or /sbin/service httpd restart is issued on an LVS router, you must start the piranha-gui service by issuing the following command: /sbin/service piranha-gui start The piranha-gui service is all that is necessary to begin configuring LVS. However, if you are configuring LVS remotely, the sshd service is also required. You do not need to start the pulse service until configuration using the Piranha Configuration Tool is complete. See Section 4.8, "Starting LVS" for information on starting the pulse service. 2.3.1. Configuring the Piranha Configuration Tool Web Server Port The Piranha Configuration Tool runs on port 3636 by default. To change this port number, change the line Listen 3636 in Section 2 of the piranha-gui Web server configuration file /etc/sysconfig/ha/conf/httpd.conf . To use the Piranha Configuration Tool you need, at minimum, a text-only Web browser. If you start a Web browser on the primary LVS router, open the location http:// localhost :3636 . You can reach the Piranha Configuration Tool from anywhere via a Web browser by replacing localhost with the hostname or IP address of the primary LVS router. When your browser connects to the Piranha Configuration Tool , you must log in to access the configuration services. Enter piranha in the Username field and the password set with piranha-passwd in the Password field. Now that the Piranha Configuration Tool is running, you may wish to consider limiting who has access to the tool over the network. The next section reviews ways to accomplish this task. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-lvs-piranha-service-VSA
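As a hedged illustration of the port change described in Section 2.3.1 above, the following commands sketch one way to switch the Piranha Configuration Tool to a different port; the port number 3637 is an arbitrary example, and the sed expression assumes the default Listen 3636 line is present exactly as shipped.

# Change the listening port in the piranha-gui Apache configuration (run as root)
sed -i 's/^Listen 3636$/Listen 3637/' /etc/sysconfig/ha/conf/httpd.conf
# Restart the private httpd session so the new port takes effect
/sbin/service piranha-gui restart

After the restart, the tool is reachable at http://localhost:3637 instead of the default port.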
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_jboss_eap/making-open-source-more-inclusive |
Chapter 1. Installation methods | Chapter 1. Installation methods You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program, by using the following method: Installing a cluster : You can install OpenShift Container Platform on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program. 1.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that you provision, by using the following method: Installing a cluster on Azure Stack Hub using ARM templates : You can install OpenShift Container Platform on Azure Stack Hub by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation. 1.3. Additional resources Configuring an Azure Stack Hub account | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_azure_stack_hub/preparing-to-install-on-azure-stack-hub |
Chapter 2. Managing compute machines with the Machine API | Chapter 2. Managing compute machines with the Machine API 2.1. Creating a compute machine set on AWS You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Amazon Web Services (AWS). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.1.1. Sample YAML for a compute machine set custom resource on AWS The sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) Local Zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 Specify the infrastructure ID, role node label, and zone. 3 Specify the role node label to add. 
4 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \ get machineset/<infrastructure_id>-<role>-<zone> 5 Specify the zone name, for example, us-east-1a . 6 Specify the region, for example, us-east-1 . 7 Specify the infrastructure ID and zone. 8 Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:[email protected] . Note Custom tags can also be specified during installation in the install-config.yml file. If the install-config.yml file and the machine set include a tag with the same name data, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yml file. 2.1.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml If you need compute machine sets in other availability zones, repeat this process to create more compute machine sets. Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.1.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.1.4. Assigning machines to placement groups for Elastic Fabric Adapter instances by using machine sets You can configure a machine set to deploy machines on Elastic Fabric Adapter (EFA) instances within an existing AWS placement group. EFA instances do not require placement groups, and you can use placement groups for purposes other than configuring an EFA. This example uses both to demonstrate a configuration that can improve network performance for machines within the specified placement group. Prerequisites You created a placement group in the AWS console. Note Ensure that the rules and limitations for the type of placement group that you create are compatible with your intended use case. Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following lines under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5 placementGroupPartition: <placement_group_partition_number> 6 # ... 1 Specify an instance type that supports EFAs . 2 Specify the EFA network interface type. 3 Specify the zone, for example, us-east-1a . 4 Specify the region, for example, us-east-1 . 5 Specify the name of the existing AWS placement group to deploy machines in. 6 Optional: Specify the partition number of the existing AWS placement group to deploy machines in. 
Verification In the AWS console, find a machine that the machine set created and verify the following in the machine properties: The placement group field has the value that you specified for the placementGroupName parameter in the machine set. The partition number field has the value that you specified for the placementGroupPartition parameter in the machine set. The interface type field indicates that it uses an EFA. 2.1.5. Machine set options for the Amazon EC2 Instance Metadata Service You can use machine sets to create machines that use a specific version of the Amazon EC2 Instance Metadata Service (IMDS). Machine sets can create machines that allow the use of both IMDSv1 and IMDSv2 or machines that require the use of IMDSv2. Note Using IMDSv2 is only supported on AWS clusters that were created with OpenShift Container Platform version 4.7 or later. To deploy new compute machines with your preferred IMDS configuration, create a compute machine set YAML file with the appropriate values. You can also edit an existing machine set to create new machines with your preferred IMDS configuration when the machine set is scaled up. Important Before configuring a machine set to create machines that require IMDSv2, ensure that any workloads that interact with the AWS metadata service support IMDSv2. 2.1.5.1. Configuring IMDS by using machine sets You can specify whether to require the use of IMDSv2 by adding or editing the value of metadataServiceOptions.authentication in the machine set YAML file for your machines. Prerequisites To use IMDSv2, your AWS cluster must have been created with OpenShift Container Platform version 4.7 or later. Procedure Add or edit the following lines under the providerSpec field: providerSpec: value: metadataServiceOptions: authentication: Required 1 1 To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. 2.1.6. Machine sets that deploy machines as Dedicated Instances You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account. Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware. 2.1.6.1. Creating Dedicated Instances by using machine sets You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS. Procedure Specify a dedicated tenancy under the providerSpec field: providerSpec: placement: tenancy: dedicated 2.1.7. Machine sets that deploy machines as Spot Instances You can save on costs by creating a compute machine set running on AWS that deploys machines as non-guaranteed Spot Instances. Spot Instances utilize unused AWS EC2 capacity and are less expensive than On-Demand Instances. 
You can use Spot Instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads. AWS EC2 can terminate a Spot Instance at any time. AWS gives a two-minute warning to the user when an interruption occurs. OpenShift Container Platform begins to remove the workloads from the affected instances when AWS issues the termination warning. Interruptions can occur when using Spot Instances for the following reasons: The instance price exceeds your maximum price The demand for Spot Instances increases The supply of Spot Instances decreases When AWS terminates an instance, a termination handler running on the Spot Instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot Instance. 2.1.7.1. Creating Spot Instances by using compute machine sets You can launch a Spot Instance on AWS by adding spotMarketOptions to your compute machine set YAML file. Procedure Add the following line under the providerSpec field: providerSpec: value: spotMarketOptions: {} You can optionally set the spotMarketOptions.maxPrice field to limit the cost of the Spot Instance. For example you can set maxPrice: '2.50' . If the maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to charge up to the On-Demand Instance price. Note It is strongly recommended to use the default On-Demand price as the maxPrice value and to not set the maximum price for Spot Instances. 2.1.8. Adding a GPU node to an existing OpenShift Container Platform cluster You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the AWS EC2 cloud provider. For more information about the supported instance types, see the following NVIDIA documentation: NVIDIA GPU Operator Community support matrix NVIDIA AI Enterprise support matrix Procedure View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific AWS region and OpenShift Container Platform role. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.31.3 ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.31.3 ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.31.3 ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.31.3 ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.31.3 ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.31.3 View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the AWS region. The installer automatically load balances compute machines across availability zones. USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE preserve-dsoc12r4-ktjfc-worker-us-east-2a 1 1 1 1 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b 2 2 2 2 3d11h View the machines that exist in the openshift-machine-api namespace by running the following command. At this time, there is only one compute machine per machine set, though a compute machine set could be scaled to add a node in a particular region and zone. 
USD oc get machines -n openshift-machine-api | grep worker Example output preserve-dsoc12r4-ktjfc-worker-us-east-2a-dts8r Running m5.xlarge us-east-2 us-east-2a 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-dkv7w Running m5.xlarge us-east-2 us-east-2b 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-k58cw Running m5.xlarge us-east-2 us-east-2b 3d11h Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition. USD oc get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -n openshift-machine-api -o json > <output_file.json> Edit the JSON file and make the following changes to the new MachineSet definition: Replace worker with gpu . This will be the name of the new machine set. Change the instance type of the new MachineSet definition to g4dn , which includes an NVIDIA Tesla T4 GPU. To learn more about AWS g4dn instance types, see Accelerated Computing . USD jq .spec.template.spec.providerSpec.value.instanceType preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json "g4dn.xlarge" The <output_file.json> file is saved as preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json . Update the following fields in preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json : .metadata.name to a name containing gpu . .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . .spec.template.spec.providerSpec.value.instanceType to g4dn.xlarge . To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command: USD oc -n openshift-machine-api get preserve-dsoc12r4-ktjfc-worker-us-east-2a -o json | diff preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json - Example output 10c10 < "name": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a", --- > "name": "preserve-dsoc12r4-ktjfc-worker-us-east-2a", 21c21 < "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a" --- > "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a" 31c31 < "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a" --- > "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a" 60c60 < "instanceType": "g4dn.xlarge", --- > "instanceType": "m5.xlarge", Create the GPU-enabled compute machine set from the definition by running the following command: USD oc create -f preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json Example output machineset.machine.openshift.io/preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a created Verification View the machine set you created by running the following command: USD oc -n openshift-machine-api get machinesets | grep gpu The MachineSet replica count is set to 1 so a new Machine object is created automatically. Example output preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a 1 1 1 1 4m21s View the Machine object that the machine set created by running the following command: USD oc -n openshift-machine-api get machines | grep gpu Example output preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a running g4dn.xlarge us-east-2 us-east-2a 4m36s Note that there is no need to specify a namespace for the node. The node definition is cluster scoped. 2.1.9. 
Deploying the Node Feature Discovery Operator After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform. Procedure Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console. After installing the NFD Operator from OperatorHub , select Node Feature Discovery from the installed Operators list and select Create instance . This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace. Verify that the Operator is installed and running by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d Browse to the installed Operator in the console and select Create Node Feature Discovery . Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalog them. Verification After a successful build, verify that an NFD pod is running on each node by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de . View the NVIDIA GPU discovered by the NFD Operator by running the following command: USD oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci' Example output Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true 10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet. 2.2. Creating a compute machine set on Azure You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.2.1.
Sample YAML for a compute machine set custom resource on Azure This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg sshPrivateKey: "" sshPublicKey: "" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: "1" 8 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 Specify the node label to add. 3 Specify the infrastructure ID, node label, and region. 4 Specify the image details for your compute machine set. If you want to use an Azure Marketplace image, see "Using the Azure Marketplace offering". 5 Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 
6 Specify the region to place machines on. 7 Optional: Specify custom tags in your machine set. Provide the tag name in <custom_tag_name> field and the corresponding tag value in <custom_tag_value> field. 8 Specify the zone within your region to place machines on. Ensure that your region supports the zone that you specify. Important If your region supports availability zones, you must specify the zone. Specifying the zone avoids volume node affinity failure when a pod requires a persistent volume attachment. To do this, you can create a compute machine set for each zone in the same region. 2.2.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.2.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.2.4. Using the Azure Marketplace offering You can create a machine set running on Azure that deploys machines that use the Azure Marketplace offering. To use this offering, you must first obtain the Azure Marketplace image. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. 
Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 Note Use the latest image that is available for compute and control plane nodes. If required, your VMs are automatically upgraded as part of the installation process. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer, specifically the values for publisher , offer , sku , and version . Add the following parameters to the providerSpec section of your machine set YAML file using the image details for your offer: Sample providerSpec image values for Azure Marketplace machines providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: "" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700 2.2.5. Enabling Azure boot diagnostics You can enable boot diagnostics on Azure machines that your machine set creates. Prerequisites Have an existing Microsoft Azure cluster. Procedure Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file: For an Azure Managed storage account: providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1 1 Specifies an Azure Managed storage account. For an Azure Unmanaged storage account: providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2 1 Specifies an Azure Unmanaged storage account. 2 Replace <storage-account> with the name of your storage account. Note Only the Azure Blob Storage data service is supported. Verification On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine. 2.2.6. 
Machine sets that deploy machines as Spot VMs You can save on costs by creating a compute machine set running on Azure that deploys machines as non-guaranteed Spot VMs. Spot VMs utilize unused Azure capacity and are less expensive than standard VMs. You can use Spot VMs for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads. Azure can terminate a Spot VM at any time. Azure gives a 30-second warning to the user when an interruption occurs. OpenShift Container Platform begins to remove the workloads from the affected instances when Azure issues the termination warning. Interruptions can occur when using Spot VMs for the following reasons: The instance price exceeds your maximum price The supply of Spot VMs decreases Azure needs capacity back When Azure terminates an instance, a termination handler running on the Spot VM node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot VM. 2.2.6.1. Creating Spot VMs by using compute machine sets You can launch a Spot VM on Azure by adding spotVMOptions to your compute machine set YAML file. Procedure Add the following line under the providerSpec field: providerSpec: value: spotVMOptions: {} You can optionally set the spotVMOptions.maxPrice field to limit the cost of the Spot VM. For example you can set maxPrice: '0.98765' . If the maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to -1 and charges up to the standard VM price. Azure caps Spot VM prices at the standard price. Azure will not evict an instance due to pricing if the instance is set with the default maxPrice . However, an instance can still be evicted due to capacity restrictions. Note It is strongly recommended to use the default standard VM price as the maxPrice value and to not set the maximum price for Spot VMs. 2.2.7. Machine sets that deploy machines on Ephemeral OS disks You can create a compute machine set running on Azure that deploys machines on Ephemeral OS disks. Ephemeral OS disks use local VM capacity rather than remote Azure Storage. This configuration therefore incurs no additional cost and provides lower latency for reading, writing, and reimaging. Additional resources For more information, see the Microsoft Azure documentation about Ephemeral OS disks for Azure VMs . 2.2.7.1. Creating machines on Ephemeral OS disks by using compute machine sets You can launch machines on Ephemeral OS disks on Azure by editing your compute machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Edit the custom resource (CR) by running the following command: USD oc edit machineset <machine-set-name> where <machine-set-name> is the compute machine set that you want to provision machines on Ephemeral OS disks. Add the following to the providerSpec field: providerSpec: value: ... osDisk: ... diskSettings: 1 ephemeralStorageLocation: Local 2 cachingType: ReadOnly 3 managedDisk: storageAccountType: Standard_LRS 4 ... 1 2 3 These lines enable the use of Ephemeral OS disks. 4 Ephemeral OS disks are only supported for VMs or scale set instances that use the Standard LRS storage account type. Important The implementation of Ephemeral OS disk support in OpenShift Container Platform only supports the CacheDisk placement type. Do not change the placement configuration setting. 
Create a compute machine set using the updated configuration: USD oc create -f <machine-set-config>.yaml Verification On the Microsoft Azure portal, review the Overview page for a machine deployed by the compute machine set, and verify that the Ephemeral OS disk field is set to OS cache placement . 2.2.8. Machine sets that deploy machines with ultra disks as data disks You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads. You can also create a persistent volume claim (PVC) that dynamically binds to a storage class backed by Azure ultra disks and mounts them to pods. Note Data disks do not support the ability to specify disk throughput or disk IOPS. You can configure these properties by using PVCs. Additional resources Microsoft Azure ultra disks documentation Machine sets that deploy machines on ultra disks using CSI PVCs Machine sets that deploy machines on ultra disks using in-tree PVCs 2.2.8.1. Creating machines with ultra disks by using machine sets You can deploy machines with ultra disks on Azure by editing your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Create a custom secret in the openshift-machine-api namespace using the worker data secret by running the following command: USD oc -n openshift-machine-api \ get secret <role>-user-data \ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2 1 Replace <role> with worker . 2 Specify userData.txt as the name of the new custom secret. In a text editor, open the userData.txt file and locate the final } character in the file. On the immediately preceding line, add a , . Create a new line after the , and add the following configuration details: "storage": { "disks": [ 1 { "device": "/dev/disk/azure/scsi1/lun0", 2 "partitions": [ 3 { "label": "lun0p1", 4 "sizeMiB": 1024, 5 "startMiB": 0 } ] } ], "filesystems": [ 6 { "device": "/dev/disk/by-partlabel/lun0p1", "format": "xfs", "path": "/var/lib/lun0p1" } ] }, "systemd": { "units": [ 7 { "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var/lib/lun0p1\nWhat=/dev/disk/by-partlabel/lun0p1\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", 8 "enabled": true, "name": "var-lib-lun0p1.mount" } ] } 1 The configuration details for the disk that you want to attach to a node as an ultra disk. 2 Specify the lun value that is defined in the dataDisks stanza of the machine set you are using. For example, if the machine set contains lun: 0 , specify lun0 . You can initialize multiple data disks by specifying multiple "disks" entries in this configuration file. If you specify multiple "disks" entries, ensure that the lun value for each matches the value in the machine set. 3 The configuration details for a new partition on the disk. 4 Specify a label for the partition. You might find it helpful to use hierarchical names, such as lun0p1 for the first partition of lun0 . 5 Specify the total size in MiB of the partition. 6 Specify the filesystem to use when formatting a partition. Use the partition label to specify the partition. 7 Specify a systemd unit to mount the partition at boot. Use the partition label to specify the partition. You can create multiple partitions by specifying multiple "partitions" entries in this configuration file. If you specify multiple "partitions" entries, you must specify a systemd unit for each. 
8 For Where , specify the value of storage.filesystems.path . For What , specify the value of storage.filesystems.device . Extract the disabling template value to a file called disableTemplating.txt by running the following command: USD oc -n openshift-machine-api get secret <role>-user-data \ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt 1 Replace <role> with worker . Combine the userData.txt file and disableTemplating.txt file to create a data secret file by running the following command: USD oc -n openshift-machine-api create secret generic <role>-user-data-x5 \ 1 --from-file=userData=userData.txt \ --from-file=disableTemplating=disableTemplating.txt 1 For <role>-user-data-x5 , specify the name of the secret. Replace <role> with worker . Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command: USD oc edit machineset <machine-set-name> where <machine-set-name> is the machine set that you want to provision machines with ultra disks. Add the following lines in the positions indicated: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4 1 Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value. 2 3 These lines enable the use of ultra disks. For dataDisks , include the entire stanza. 4 Specify the user data secret created earlier. Replace <role> with worker . Create a machine set using the updated configuration by running the following command: USD oc create -f <machine-set-name>.yaml Verification Validate that the machines are created by running the following command: USD oc get machines The machines should be in the Running state. For a machine that is running and has a node attached, validate the partition by running the following command: USD oc debug node/<node-name> -- chroot /host lsblk In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with -- . The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine. steps To use an ultra disk from within a pod, create a workload that uses the mount point. Create a YAML file similar to the following example: apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: "http-server" volumeMounts: - name: lun0p1 mountPath: "/tmp" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd 2.2.8.2. Troubleshooting resources for machine sets that enable ultra disks Use the information in this section to understand and recover from issues you might encounter. 2.2.8.2.1. Incorrect ultra disk configuration If an incorrect configuration of the ultraSSDCapability parameter is specified in the machine set, the machine provisioning fails. For example, if the ultraSSDCapability parameter is set to Disabled , but an ultra disk is specified in the dataDisks parameter, the following error message appears: StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set. 
To resolve this issue, verify that your machine set configuration is correct. 2.2.8.2.2. Unsupported disk parameters If a region, availability zone, or instance size that is not compatible with ultra disks is specified in the machine set, the machine provisioning fails. Check the logs for the following error message: failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>." To resolve this issue, verify that you are using this feature in a supported environment and that your machine set configuration is correct. 2.2.8.2.3. Unable to delete disks If the deletion of ultra disks as data disks is not working as expected, the machines are deleted and the data disks are orphaned. You must delete the orphaned disks manually if desired. 2.2.9. Enabling customer-managed encryption keys for a machine set You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API. An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If not, an additional reader role is required to be granted on the disk encryption set. Prerequisites Create an Azure Key Vault instance . Create an instance of a disk encryption set . Grant the disk encryption set access to key vault . Procedure Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example: providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS Additional resources Azure documentation about customer-managed keys 2.2.10. Configuring trusted launch for Azure virtual machines by using machine sets Important Using trusted launch for Azure virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.18 supports trusted launch for Azure virtual machines (VMs). By editing the machine set YAML file, you can configure the trusted launch options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. Note Some feature combinations result in an invalid configuration. Table 2.1. 
UEFI feature combination compatibility
Secure Boot [1]   vTPM [2]   Valid configuration
Enabled           Enabled    Yes
Enabled           Disabled   Yes
Enabled           Omitted    Yes
Disabled          Enabled    Yes
Omitted           Enabled    Yes
Disabled          Disabled   No
Omitted           Disabled   No
Omitted           Omitted    No
[1] Using the secureBoot field.
[2] Using the virtualizedTrustedPlatformModule field.
For more information about related features and functionality, see the Microsoft Azure documentation about Trusted launch for Azure virtual machines . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field to provide a valid configuration: Sample valid configuration with UEFI Secure Boot and vTPM enabled apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ... 1 Enables the use of trusted launch for Azure virtual machines. This value is required for all valid configurations. 2 Specifies which UEFI security features to use. This section is required for all valid configurations. 3 Enables UEFI Secure Boot. 4 Enables the use of a vTPM. Verification On the Azure portal, review the details for a machine deployed by the machine set and verify that the trusted launch options match the values that you configured. 2.2.11. Configuring Azure confidential virtual machines by using machine sets Important Using Azure confidential virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.18 supports Azure confidential virtual machines (VMs). Note Confidential VMs are currently not supported on 64-bit ARM architectures. By editing the machine set YAML file, you can configure the confidential VM options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. For more information about related features and functionality, see the Microsoft Azure documentation about Confidential virtual machines . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: Sample configuration apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: osDisk: # ... managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # ... securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8 # ... 1 Specifies security profile settings for the managed disk when using a confidential VM. 2 Enables encryption of the Azure VM Guest State (VMGS) blob. This setting requires the use of vTPM.
3 Specifies security profile settings for the confidential VM. 4 Enables the use of confidential VMs. This value is required for all valid configurations. 5 Specifies which UEFI security features to use. This section is required for all valid configurations. 6 Disables UEFI Secure Boot. 7 Enables the use of a vTPM. 8 Specifies an instance type that supports confidential VMs. Verification On the Azure portal, review the details for a machine deployed by the machine set and verify that the confidential VM options match the values that you configured. 2.2.12. Accelerated Networking for Microsoft Azure VMs Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. This enhances network performance. This feature can be enabled during or after installation. 2.2.12.1. Limitations Consider the following limitations when deciding whether to use Accelerated Networking: Accelerated Networking is only supported on clusters where the Machine API is operational. Although the minimum requirement for an Azure worker node is two vCPUs, Accelerated Networking requires an Azure VM size that includes at least four vCPUs. To satisfy this requirement, you can change the value of vmSize in your machine set. For information about Azure VM sizes, see Microsoft Azure documentation . When this feature is enabled on an existing Azure cluster, only newly provisioned nodes are affected. Currently running nodes are not reconciled. To enable the feature on all nodes, you must replace each existing machine. This can be done for each machine individually, or by scaling the replicas down to zero, and then scaling back up to your desired number of replicas. 2.2.13. Configuring Capacity Reservation by using machine sets OpenShift Container Platform version 4.18 and later supports on-demand Capacity Reservation with Capacity Reservation groups on Microsoft Azure clusters. You can configure a machine set to deploy machines on any available resources that match the parameters of a capacity request that you define. These parameters specify the VM size, region, and number of instances that you want to reserve. If your Azure subscription quota can accommodate the capacity request, the deployment succeeds. For more information, including limitations and suggested use cases for this Azure instance type, see the Microsoft Azure documentation about On-demand Capacity Reservation . Note You cannot change an existing Capacity Reservation configuration for a machine set. To use a different Capacity Reservation group, you must replace the machine set and the machines that the machine set deployed. Prerequisites You have access to the cluster with cluster-admin privileges. You installed the OpenShift CLI ( oc ). You created a Capacity Reservation group. For more information, see the Microsoft Azure documentation Create a Capacity Reservation . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: Sample configuration apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1 # ... 1 Specify the ID of the Capacity Reservation group that you want the machine set to deploy machines on. 
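The capacityReservationGroupID value is typically the full Azure resource ID of the Capacity Reservation group. The following sketch shows the general shape of that value; the subscription ID, resource group, and group name are illustrative placeholders, not values taken from this procedure:
Sample configuration with a placeholder resource ID
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
# ...
spec:
  template:
    spec:
      providerSpec:
        value:
          # Placeholder Azure resource ID; substitute your own subscription ID,
          # resource group, and Capacity Reservation group name.
          capacityReservationGroupID: /subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/myclustername-rg/providers/Microsoft.Compute/capacityReservationGroups/my-reservation-group
# ...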
Verification To verify machine deployment, list the machines that the machine set created by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> where <machine_set_name> is the name of the compute machine set. In the output, verify that the characteristics of the listed machines match the parameters of your Capacity Reservation. 2.2.14. Adding a GPU node to an existing OpenShift Container Platform cluster You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the Azure cloud provider. The following table lists the validated instance types:
vmSize                 NVIDIA GPU accelerator   Maximum number of GPUs   Architecture
Standard_NC24s_v3      V100                     4                        x86
Standard_NC4as_T4_v3   T4                       1                        x86
ND A100 v4             A100                     8                        x86
Note By default, Azure subscriptions do not have a quota for the Azure instance types with GPU. Customers have to request a quota increase for the Azure instance families listed above. Procedure View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the Azure region. The installer automatically load balances compute machines across availability zones. USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 6h9m myclustername-worker-centralus2 1 1 1 1 6h9m myclustername-worker-centralus3 1 1 1 1 6h9m Make a copy of one of the existing compute MachineSet definitions and output the result to a YAML file by running the following command. This will be the basis for the GPU-enabled compute machine set definition.
USD oc get machineset -n openshift-machine-api myclustername-worker-centralus1 -o yaml > machineset-azure.yaml View the content of the machineset: USD cat machineset-azure.yaml Example machineset-azure.yaml file apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: "0" machine.openshift.io/memoryMb: "16384" machine.openshift.io/vCPU: "4" creationTimestamp: "2023-02-06T14:08:19Z" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-worker-centralus1 namespace: openshift-machine-api resourceVersion: "23601" uid: acd56e0c-7612-473a-ae37-8704f34b80de spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: "" publisher: "" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: "" version: "" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: myclustername-vnet zone: "1" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1 Make a copy of the machineset-azure.yaml file by running the following command: USD cp machineset-azure.yaml machineset-azure-gpu.yaml Update the following fields in machineset-azure-gpu.yaml : Change .metadata.name to a name containing gpu . Change .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name. Change .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . Change .spec.template.spec.providerSpec.value.vmSize to Standard_NC4as_T4_v3 . 
Example machineset-azure-gpu.yaml file apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: "1" machine.openshift.io/memoryMb: "28672" machine.openshift.io/vCPU: "4" creationTimestamp: "2023-02-06T20:27:12Z" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-nc4ast4-gpu-worker-centralus1 namespace: openshift-machine-api resourceVersion: "166285" uid: 4eedce7f-6a57-4abe-b529-031140f02ffa spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: "" publisher: "" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: "" version: "" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_NC4as_T4_v3 vnet: myclustername-vnet zone: "1" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1 To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command: USD diff machineset-azure.yaml machineset-azure-gpu.yaml Example output 14c14 < name: myclustername-worker-centralus1 --- > name: myclustername-nc4ast4-gpu-worker-centralus1 23c23 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 30c30 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 67c67 < vmSize: Standard_D4s_v3 --- > vmSize: Standard_NC4as_T4_v3 Create the GPU-enabled compute machine set from the definition file by running the following command: USD oc create -f machineset-azure-gpu.yaml Example output machineset.machine.openshift.io/myclustername-nc4ast4-gpu-worker-centralus1 created View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the Azure region. The installer automatically load balances compute machines across availability zones. 
USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE clustername-n6n4r-nc4ast4-gpu-worker-centralus1 1 1 1 1 122m clustername-n6n4r-worker-centralus1 1 1 1 1 8h clustername-n6n4r-worker-centralus2 1 1 1 1 8h clustername-n6n4r-worker-centralus3 1 1 1 1 8h View the machines that exist in the openshift-machine-api namespace by running the following command. You can only configure one compute machine per set, although you can scale a compute machine set to add a node in a particular region and zone. USD oc get machines -n openshift-machine-api Example output NAME PHASE TYPE REGION ZONE AGE myclustername-master-0 Running Standard_D8s_v3 centralus 2 6h40m myclustername-master-1 Running Standard_D8s_v3 centralus 1 6h40m myclustername-master-2 Running Standard_D8s_v3 centralus 3 6h40m myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running centralus 1 21m myclustername-worker-centralus1-rbh6b Running Standard_D4s_v3 centralus 1 6h38m myclustername-worker-centralus2-dbz7w Running Standard_D4s_v3 centralus 2 6h38m myclustername-worker-centralus3-p9b8c Running Standard_D4s_v3 centralus 3 6h38m View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific Azure region and OpenShift Container Platform role. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION myclustername-master-0 Ready control-plane,master 6h39m v1.31.3 myclustername-master-1 Ready control-plane,master 6h41m v1.31.3 myclustername-master-2 Ready control-plane,master 6h39m v1.31.3 myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.31.3 myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.31.3 myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.31.3 myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.31.3 View the list of compute machine sets: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h Create the GPU-enabled compute machine set from the definition file by running the following command: USD oc create -f machineset-azure-gpu.yaml View the list of compute machine sets: oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h Verification View the machine set you created by running the following command: USD oc get machineset -n openshift-machine-api | grep gpu The MachineSet replica count is set to 1 so a new Machine object is created automatically. Example output myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m View the Machine object that the machine set created by running the following command: USD oc -n openshift-machine-api get machines | grep gpu Example output myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running Standard_NC4as_T4_v3 centralus 1 21m Note There is no need to specify a namespace for the node. The node definition is cluster scoped. 2.2.15. Deploying the Node Feature Discovery Operator After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. 
The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform. Procedure Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console. After installing the NFD Operator from OperatorHub , select Node Feature Discovery from the installed Operators list and select Create instance . This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace. Verify that the Operator is installed and running by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d Browse to the installed Operator in the console and select Create Node Feature Discovery . Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalog them. Verification After a successful build, verify that an NFD pod is running on each node by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de . View the NVIDIA GPU discovered by the NFD Operator by running the following command: USD oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci' Example output Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true 10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet. Additional resources Enabling Accelerated Networking during installation 2.2.15.1. Enabling Accelerated Networking on an existing Microsoft Azure cluster You can enable Accelerated Networking on Azure by adding acceleratedNetworking to your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster where the Machine API is operational. Procedure Add the following to the providerSpec field: providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2 1 This line enables Accelerated Networking. 2 Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation . Next steps To enable the feature on currently running nodes, you must replace each existing machine. This can be done for each machine individually, or by scaling the replicas down to zero, and then scaling back up to your desired number of replicas. Verification On the Microsoft Azure portal, review the Networking settings page for a machine provisioned by the machine set, and verify that the Accelerated networking field is set to Enabled . Additional resources Manually scaling a compute machine set 2.3. Creating a compute machine set on Azure Stack Hub You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Microsoft Azure Stack Hub.
For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.3.1. Sample YAML for a compute machine set custom resource on Azure Stack Hub This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 11 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 12 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: "" sshPublicKey: "" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: "1" 21 1 5 7 13 15 16 17 20 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 3 8 9 11 18 19 Specify the node label to add. 4 6 10 Specify the infrastructure ID, node label, and region. 14 Specify the region to place machines on. 21 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 12 Specify the availability set for the cluster. 2.3.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Create an availability set in which to deploy Azure Stack Hub compute machines. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <availabilitySet> , <clusterID> , and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. 
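Optional: Before you create the resource, you can check the manifest locally without creating anything on the cluster. The following command is a sketch that assumes your oc client supports client-side dry runs; it prints the rendered object, or an error if the file has basic syntax problems:

USD oc create -f <file_name>.yaml --dry-run=client -o yaml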
Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.3.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.3.4. Enabling Azure boot diagnostics You can enable boot diagnostics on Azure machines that your machine set creates. Prerequisites Have an existing Microsoft Azure Stack Hub cluster. Procedure Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file: For an Azure Managed storage account: providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1 1 Specifies an Azure Managed storage account. For an Azure Unmanaged storage account: providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2 1 Specifies an Azure Unmanaged storage account. 2 Replace <storage-account> with the name of your storage account. Note Only the Azure Blob Storage data service is supported. Verification On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine. 2.3.5. Enabling customer-managed encryption keys for a machine set You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API. An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If not, an additional reader role is required to be granted on the disk encryption set. Prerequisites Create an Azure Key Vault instance . Create an instance of a disk encryption set . Grant the disk encryption set access to key vault . 
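For illustration only, the prerequisites above can be satisfied with Azure CLI commands similar to the following sketch. The names in angle brackets are placeholders, and the exact commands and options available in your Azure Stack Hub environment might differ, so treat this as an outline rather than a verified procedure:

USD az keyvault create --name <key_vault_name> --resource-group <resource_group> --enable-purge-protection true
USD az keyvault key create --vault-name <key_vault_name> --name <key_name> --protection software
USD az disk-encryption-set create --name <disk_encryption_set_name> --resource-group <resource_group> --key-url <key_identifier> --source-vault <key_vault_name>
USD az keyvault set-policy --name <key_vault_name> --object-id <disk_encryption_set_principal_id> --key-permissions get wrapkey unwrapkey

The last command grants the managed identity of the disk encryption set access to the key vault, which corresponds to the final prerequisite in the list above.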
Procedure Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example: providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS Additional resources Azure documentation about customer-managed keys 2.4. Creating a compute machine set on GCP You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Google Cloud Platform (GCP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.4.1. Sample YAML for a compute machine set custom resource on GCP This sample YAML defines a compute machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" , where <role> is the node label to add. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI. Infrastructure ID The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Image path The <path_to_image> string is the path to the image that was used to create the disk. 
If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \ get machineset/<infrastructure_id>-worker-a Sample GCP MachineSet values apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 2 For <role> , specify the node label to add. 3 Specify the path to the image that is used in current compute machine sets. To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 5 For <project_name> , specify the name of the GCP project that you use for your cluster. 6 Specifies a single service account. Multiple service accounts are not supported. 2.4.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.4.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. 
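For example, a ClusterAutoscaler resource that accounts for machines carrying the nvidia-t4 label shown above might include a gpus entry like the following sketch. The min and max values are placeholders that you would set to your own capacity limits:

apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    gpus:
      - type: nvidia-t4 # must match the cluster-api/accelerator label value
        min: 0
        max: 4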
For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.4.4. Configuring persistent disk types by using machine sets You can configure the type of persistent disk that a machine set deploys machines on by editing the machine set YAML file. For more information about persistent disk types, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about persistent disks . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following line under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: template: spec: providerSpec: value: disks: type: <pd-disk-type> 1 1 Specify the persistent disk type. Valid values are pd-ssd , pd-standard , and pd-balanced . The default value is pd-standard . Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Type field matches the configured disk type. 2.4.5. Configuring Confidential VM by using machine sets By editing the machine set YAML file, you can configure the Confidential VM options that a machine set uses for machines that it deploys. For more information about Confidential VM features, functions, and compatibility, see the GCP Compute Engine documentation about Confidential VM . Note Confidential VMs are currently not supported on 64-bit ARM architectures. Important OpenShift Container Platform 4.18 does not support some Confidential Compute features, such as Confidential VMs with AMD Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP). Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3 ... 1 Specify whether Confidential VM is enabled. Valid values are Disabled or Enabled . 2 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VM does not support live VM migration. 3 Specify a machine type that supports Confidential VM. Confidential VM supports the N2D and C2D series of machine types. Verification On the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Confidential VM options match the values that you configured. 2.4.6. Machine sets that deploy machines as preemptible VM instances You can save on costs by creating a compute machine set running on GCP that deploys machines as non-guaranteed preemptible VM instances. Preemptible VM instances utilize excess Compute Engine capacity and are less expensive than normal instances. You can use preemptible VM instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads. GCP Compute Engine can terminate a preemptible VM instance at any time. Compute Engine sends a preemption notice to the user indicating that an interruption will occur in 30 seconds. OpenShift Container Platform begins to remove the workloads from the affected instances when Compute Engine issues the preemption notice. 
An ACPI G3 Mechanical Off signal is sent to the operating system after 30 seconds if the instance is not stopped. The preemptible VM instance is then transitioned to a TERMINATED state by Compute Engine. Interruptions can occur when using preemptible VM instances for the following reasons: There is a system or maintenance event The supply of preemptible VM instances decreases The instance reaches the end of the allotted 24-hour period for preemptible VM instances When GCP terminates an instance, a termination handler running on the preemptible VM instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a preemptible VM instance. 2.4.6.1. Creating preemptible VM instances by using compute machine sets You can launch a preemptible VM instance on GCP by adding preemptible to your compute machine set YAML file. Procedure Add the following line under the providerSpec field: providerSpec: value: preemptible: true If preemptible is set to true , the machine is labelled as an interruptable-instance after the instance is launched. 2.4.7. Configuring Shielded VM options by using machine sets By editing the machine set YAML file, you can configure the Shielded VM options that a machine set uses for machines that it deploys. For more information about Shielded VM features and functionality, see the GCP Compute Engine documentation about Shielded VM . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ... 1 In this section, specify any Shielded VM options that you want. 2 Specify whether integrity monitoring is enabled. Valid values are Disabled or Enabled . Note When integrity monitoring is enabled, you must not disable virtual trusted platform module (vTPM). 3 Specify whether UEFI Secure Boot is enabled. Valid values are Disabled or Enabled . 4 Specify whether vTPM is enabled. Valid values are Disabled or Enabled . Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Shielded VM options match the values that you configured. Additional resources What is Shielded VM? Secure Boot Virtual Trusted Platform Module (vTPM) Integrity monitoring 2.4.8. Enabling customer-managed encryption keys for a machine set Google Cloud Platform (GCP) Compute Engine allows users to supply an encryption key to encrypt data on disks at rest. The key is used to encrypt the data encryption key, not to encrypt the customer's data. By default, Compute Engine encrypts this data by using Compute Engine keys. You can enable encryption with a customer-managed key in clusters that use the Machine API. You must first create a KMS key and assign the correct permissions to a service account. The KMS key name, key ring name, and location are required to allow a service account to use your key. Note If you do not want to use a dedicated service account for the KMS encryption, the Compute Engine default service account is used instead. You must grant the default service account permission to access the keys if you do not use a dedicated service account. 
The Compute Engine default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. Procedure To allow a specific service account to use your KMS key and to grant the service account the correct IAM role, run the following command with your KMS key name, key ring name, and location: USD gcloud kms keys add-iam-policy-binding <key_name> \ --keyring <key_ring_name> \ --location <key_ring_location> \ --member "serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com" \ --role roles/cloudkms.cryptoKeyEncrypterDecrypter Configure the encryption key under the providerSpec field in your machine set YAML file. For example: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5 1 The name of the customer-managed encryption key that is used for the disk encryption. 2 The name of the KMS key ring that the KMS key belongs to. 3 The GCP location in which the KMS key ring exists. 4 Optional: The ID of the project in which the KMS key ring exists. If a project ID is not set, the machine set projectID in which the machine set was created is used. 5 Optional: The service account that is used for the encryption request for the given KMS key. If a service account is not set, the Compute Engine default service account is used. When a new machine is created by using the updated providerSpec object configuration, the disk encryption key is encrypted with the KMS key. 2.4.9. Enabling GPU support for a compute machine set Google Cloud Platform (GCP) Compute Engine enables users to add GPUs to VM instances. Workloads that benefit from access to GPU resources can perform better on compute machines with this feature enabled. OpenShift Container Platform on GCP supports NVIDIA GPU models in the A2 and N1 machine series. Table 2.2. Supported GPU configurations Model name GPU type Machine types [1] NVIDIA A100 nvidia-tesla-a100 a2-highgpu-1g a2-highgpu-2g a2-highgpu-4g a2-highgpu-8g a2-megagpu-16g NVIDIA K80 nvidia-tesla-k80 n1-standard-1 n1-standard-2 n1-standard-4 n1-standard-8 n1-standard-16 n1-standard-32 n1-standard-64 n1-standard-96 n1-highmem-2 n1-highmem-4 n1-highmem-8 n1-highmem-16 n1-highmem-32 n1-highmem-64 n1-highmem-96 n1-highcpu-2 n1-highcpu-4 n1-highcpu-8 n1-highcpu-16 n1-highcpu-32 n1-highcpu-64 n1-highcpu-96 NVIDIA P100 nvidia-tesla-p100 NVIDIA P4 nvidia-tesla-p4 NVIDIA T4 nvidia-tesla-t4 NVIDIA V100 nvidia-tesla-v100 For more information about machine types, including specifications, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about N1 machine series , A2 machine series , and GPU regions and zones availability . You can define which supported GPU to use for an instance by using the Machine API. You can configure machines in the N1 machine series to deploy with one of the supported GPU types. Machines in the A2 machine series come with associated GPUs, and cannot use guest accelerators. Note GPUs for graphics workloads are not supported. Procedure In a text editor, open the YAML file for an existing compute machine set or create a new one. Specify a GPU configuration under the providerSpec field in your compute machine set YAML file. 
See the following examples of valid configurations: Example configuration for the A2 machine series providerSpec: value: machineType: a2-highgpu-1g 1 onHostMaintenance: Terminate 2 restartPolicy: Always 3 1 Specify the machine type. Ensure that the machine type is included in the A2 machine series. 2 When using GPU support, you must set onHostMaintenance to Terminate . 3 Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never . Example configuration for the N1 machine series providerSpec: value: gpus: - count: 1 1 type: nvidia-tesla-p100 2 machineType: n1-standard-1 3 onHostMaintenance: Terminate 4 restartPolicy: Always 5 1 Specify the number of GPUs to attach to the machine. 2 Specify the type of GPUs to attach to the machine. Ensure that the machine type and GPU type are compatible. 3 Specify the machine type. Ensure that the machine type and GPU type are compatible. 4 When using GPU support, you must set onHostMaintenance to Terminate . 5 Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never . 2.4.10. Adding a GPU node to an existing OpenShift Container Platform cluster You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the GCP cloud provider. The following table lists the validated instance types: Instance type NVIDIA GPU accelerator Maximum number of GPUs Architecture a2-highgpu-1g A100 1 x86 n1-standard-4 T4 1 x86 Procedure Make a copy of an existing MachineSet . In the new copy, change the machine set name in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset . Change the instance type to add the following two lines to the newly copied MachineSet : Example a2-highgpu-1g.json file { "apiVersion": "machine.openshift.io/v1beta1", "kind": "MachineSet", "metadata": { "annotations": { "machine.openshift.io/GPU": "0", "machine.openshift.io/memoryMb": "16384", "machine.openshift.io/vCPU": "4" }, "creationTimestamp": "2023-01-13T17:11:02Z", "generation": 1, "labels": { "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p" }, "name": "myclustername-2pt9p-worker-gpu-a", "namespace": "openshift-machine-api", "resourceVersion": "20185", "uid": "2daf4712-733e-4399-b4b4-d43cb1ed32bd" }, "spec": { "replicas": 1, "selector": { "matchLabels": { "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p", "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" } }, "template": { "metadata": { "labels": { "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p", "machine.openshift.io/cluster-api-machine-role": "worker", "machine.openshift.io/cluster-api-machine-type": "worker", "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" } }, "spec": { "lifecycleHooks": {}, "metadata": {}, "providerSpec": { "value": { "apiVersion": "machine.openshift.io/v1beta1", "canIPForward": false, "credentialsSecret": { "name": "gcp-cloud-credentials" }, "deletionProtection": false, "disks": [ { "autoDelete": true, "boot": true, "image": "projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64", "labels": null, "sizeGb": 128, "type": "pd-ssd" } ], "kind": "GCPMachineProviderSpec", "machineType": "a2-highgpu-1g", "onHostMaintenance": "Terminate", "metadata": { "creationTimestamp": null }, "networkInterfaces": [ { "network": "myclustername-2pt9p-network", "subnetwork": "myclustername-2pt9p-worker-subnet" } ], 
"preemptible": true, "projectID": "myteam", "region": "us-central1", "serviceAccounts": [ { "email": "[email protected]", "scopes": [ "https://www.googleapis.com/auth/cloud-platform" ] } ], "tags": [ "myclustername-2pt9p-worker" ], "userDataSecret": { "name": "worker-user-data" }, "zone": "us-central1-a" } } } } }, "status": { "availableReplicas": 1, "fullyLabeledReplicas": 1, "observedGeneration": 1, "readyReplicas": 1, "replicas": 1 } } View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific GCP region and OpenShift Container Platform role. USD oc get nodes Example output NAME STATUS ROLES AGE VERSION myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.31.3 myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.31.3 myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.31.3 myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.31.3 myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.31.3 myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.31.3 myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.31.3 View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the GCP region. The installer automatically load balances compute machines across availability zones. USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-2pt9p-worker-a 1 1 1 1 8h myclustername-2pt9p-worker-b 1 1 1 1 8h myclustername-2pt9p-worker-c 1 1 8h myclustername-2pt9p-worker-f 0 0 8h View the machines that exist in the openshift-machine-api namespace by running the following command. You can only configure one compute machine per set, although you can scale a compute machine set to add a node in a particular region and zone. USD oc get machines -n openshift-machine-api | grep worker Example output myclustername-2pt9p-worker-a-mxtnz Running n2-standard-4 us-central1 us-central1-a 8h myclustername-2pt9p-worker-b-9pzzn Running n2-standard-4 us-central1 us-central1-b 8h myclustername-2pt9p-worker-c-6pbg6 Running n2-standard-4 us-central1 us-central1-c 8h Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition. USD oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json > <output_file.json> Edit the JSON file to make the following changes to the new MachineSet definition: Rename the machine set name by inserting the substring gpu in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset . Change the machineType of the new MachineSet definition to a2-highgpu-1g , which includes an NVIDIA A100 GPU. jq .spec.template.spec.providerSpec.value.machineType ocp_4.18_machineset-a2-highgpu-1g.json "a2-highgpu-1g" The <output_file.json> file is saved as ocp_4.18_machineset-a2-highgpu-1g.json . Update the following fields in ocp_4.18_machineset-a2-highgpu-1g.json : Change .metadata.name to a name containing gpu . Change .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . 
Change .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name . Change .spec.template.spec.providerSpec.value.machineType to a2-highgpu-1g . Add the following line under machineType : "onHostMaintenance": "Terminate" . For example: "machineType": "a2-highgpu-1g", "onHostMaintenance": "Terminate", To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command: USD oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.18_machineset-a2-highgpu-1g.json - Example output 15c15 < "name": "myclustername-2pt9p-worker-gpu-a", --- > "name": "myclustername-2pt9p-worker-a", 25c25 < "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" --- > "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a" 34c34 < "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a" --- > "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a" 59,60c59 < "machineType": "a2-highgpu-1g", < "onHostMaintenance": "Terminate", --- > "machineType": "n2-standard-4", Create the GPU-enabled compute machine set from the definition file by running the following command: USD oc create -f ocp_4.18_machineset-a2-highgpu-1g.json Example output machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created Verification View the machine set you created by running the following command: USD oc -n openshift-machine-api get machinesets | grep gpu The MachineSet replica count is set to 1 so a new Machine object is created automatically. Example output myclustername-2pt9p-worker-gpu-a 1 1 1 1 5h24m View the Machine object that the machine set created by running the following command: USD oc -n openshift-machine-api get machines | grep gpu Example output myclustername-2pt9p-worker-gpu-a-wxcr6 Running a2-highgpu-1g us-central1 us-central1-a 5h25m Note Note that there is no need to specify a namespace for the node. The node definition is cluster scoped. 2.4.11. Deploying the Node Feature Discovery Operator After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform. Procedure Install the Node Feature Discovery Operator from OperatorHub in the OpenShift Container Platform console. After installing the NFD Operator into OperatorHub , select Node Feature Discovery from the installed Operators list and select Create instance . This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace. Verify that the Operator is installed and running by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d Browse to the installed Operator in the console and select Create Node Feature Discovery . Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalog them.
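If you prefer the CLI to the console form, you can create the custom resource from a manifest instead. The following is a minimal sketch that assumes the nfd.openshift.io/v1 API and relies on the Operator's defaults; the console form might populate additional operand fields for you, and the resource name here is only an example:

apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-instance # example name for this sketch
  namespace: openshift-nfd
spec: {} # accept the Operator defaults

USD oc apply -f <nfd_instance>.yaml

After the resource is created, continue with the verification steps that follow.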
Verification After a successful build, verify that an NFD pod is running on each node by running the following command: USD oc get pods -n openshift-nfd Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de . View the NVIDIA GPU discovered by the NFD Operator by running the following command: USD oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci' Example output Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true 10de appears in the node feature list for the GPU-enabled node. This means the NFD Operator correctly identified the node from the GPU-enabled MachineSet. 2.5. Creating a compute machine set on IBM Cloud You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on IBM Cloud(R). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.5.1. Sample YAML for a compute machine set custom resource on IBM Cloud This sample YAML defines a compute machine set that runs in a specified IBM Cloud(R) zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 1 5 7 The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 16 The node label to add. 4 6 10 The infrastructure ID, node label, and region. 11 The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation. 12 The infrastructure ID and zone within your region to place machines on. Be sure that your region supports the zone that you specify. 13 Specify the IBM Cloud(R) instance profile . 14 Specify the region to place machines on. 15 The resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID. 17 The VPC name. 18 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 2.5.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. 
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.5.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.6. 
Creating a compute machine set on IBM Power Virtual Server You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on IBM Power(R) Virtual Server. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.6.1. Sample YAML for a compute machine set custom resource on IBM Power Virtual Server This sample YAML file defines a compute machine set that runs in a specified IBM Power(R) Virtual Server zone in a region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: powervs-credentials image: name: rhcos-<infrastructure_id> 11 type: Name keyPairName: <infrastructure_id>-key kind: PowerVSMachineProviderConfig memoryGiB: 32 network: regex: ^DHCPSERVER[0-9a-z]{32}_PrivateUSD type: RegEx processorType: Shared processors: "0.5" serviceInstance: id: <ibm_power_vs_service_instance_id> type: ID 12 systemType: s922 userDataSecret: name: <role>-user-data 1 5 7 The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 The node label to add. 4 6 10 The infrastructure ID, node label, and region. 11 The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation. 12 The infrastructure ID within your region to place machines on. 2.6.2. 
Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.6.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. 
Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.7. Creating a compute machine set on Nutanix You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Nutanix. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.7.1. Sample YAML for a compute machine set custom resource on Nutanix This sample YAML defines a Nutanix compute machine set that creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI ( oc ). Infrastructure ID The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: "16384" machine.openshift.io/vCPU: "4" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: "" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: 11 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 12 userDataSecret: name: <user_data_secret> 13 vcpuSockets: 4 14 vcpusPerSocket: 1 15 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 2 Specify the node label to add. 3 Specify the infrastructure ID, node label, and zone. 4 Annotations for the cluster autoscaler. 5 Specifies the boot type that the compute machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Valid values are Legacy , SecureBoot , or UEFI . The default is Legacy . Note You must use the Legacy boot type in OpenShift Container Platform 4.18. 6 Specify one or more Nutanix Prism categories to apply to compute machines. This stanza requires key and value parameters for a category key-value pair that exists in Prism Central. For more information about categories, see Category management . 7 Specify a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid , so there is a uuid stanza. 8 Specify the image to use. Use an image from an existing default compute machine set for the cluster. 9 Specify the amount of memory for the cluster in Gi. 10 Specify the Nutanix project that you use for your cluster. In this example, the project type is name , so there is a name stanza. 11 Specify one or more UUID for the Prism Element subnet object. The CIDR IP address prefix for one of the specified subnets must contain the virtual IP addresses that the OpenShift Container Platform cluster uses. A maximum of 32 subnets for each Prism Element failure domain in the cluster is supported. All subnet UUID values must be unique. 12 Specify the size of the system disk in Gi. 13 Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that installation program populates in the default compute machine set. 14 Specify the number of vCPU sockets. 
15 Specify the number of vCPUs per socket. 2.7.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.7.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. 
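The label value that you apply in the following procedure must also be declared under spec.resourceLimits.gpus in the ClusterAutoscaler CR, as the note in the procedure explains. For orientation only, a minimal sketch of that stanza, assuming the nvidia-t4 label value used in the example, might look like this; the min and max counts are placeholders rather than recommended values:
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default # the ClusterAutoscaler CR is a singleton named default
spec:
  resourceLimits:
    gpus:
      - type: nvidia-t4 # must match the value of the cluster-api/accelerator label
        min: 0 # placeholder minimum number of GPUs of this type
        max: 4 # placeholder maximum number of GPUs of this type
See "Cluster autoscaler resource definition" for the full field reference.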
Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.7.4. Failure domains for Nutanix clusters To add or update the failure domain configuration on a Nutanix cluster, you must make coordinated changes to several resources. The following actions are required: Modify the cluster infrastructure custom resource (CR). Modify the cluster control plane machine set CR. Modify or replace the compute machine set CRs. For more information, see "Adding failure domains to an existing Nutanix cluster" in the Post-installation configuration content. Additional resources Adding failure domains to an existing Nutanix cluster 2.8. Creating a compute machine set on OpenStack You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.8.1. Sample YAML for a compute machine set custom resource on RHOSP This sample YAML defines a compute machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: <rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone> 1 5 7 13 15 16 17 18 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 19 Specify the node label to add. 4 6 10 Specify the infrastructure ID and node label. 11 To set a server group policy for the MachineSet, enter the value that is returned from creating a server group . For most deployments, anti-affinity or soft-anti-affinity policies are recommended. 12 Required for deployments to multiple networks. To specify multiple networks, add another entry in the networks array. Also, you must include the network that is used as the primarySubnet value. 14 Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file. 2.8.2. Sample YAML for a compute machine set custom resource that uses SR-IOV on RHOSP If you configured your cluster for single-root I/O virtualization (SR-IOV), you can create compute machine sets that use that technology. This sample YAML defines a compute machine set that uses SR-IOV networks. The nodes that it creates are labeled with node-role.openshift.io/<node_role>: "" In this sample, infrastructure_id is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and node_role is the node label to add. The sample assumes two SR-IOV networks that are named "radio" and "uplink". The networks are used in port definitions in the spec.template.spec.providerSpec.value.ports list. Note Only parameters that are specific to SR-IOV deployments are described in this sample. To review a more general sample, see "Sample YAML for a compute machine set custom resource on RHOSP". 
An example compute machine set that uses SR-IOV networks apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: <machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> configDrive: true 9 1 5 Enter a network UUID for each port. 2 6 Enter a subnet UUID for each port. 3 7 The value of the vnicType parameter must be direct for each port. 4 8 The value of the portSecurity parameter must be false for each port. You cannot set security groups and allowed address pairs for ports when port security is disabled. Setting security groups on the instance applies the groups to all ports that are attached to it. 9 The value of the configDrive parameter must be true . Important After you deploy compute machines that are SR-IOV-capable, you must label them as such. For example, from a command line, enter: USD oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable="true" Note Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix> . The nameSuffix field is required in port definitions. You can enable trunking for each port. Optionally, you can add tags to ports as part of their tags lists. Additional resources Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack 2.8.3. Sample YAML for SR-IOV deployments where port security is disabled To create single-root I/O virtualization (SR-IOV) ports on a network that has port security disabled, define a compute machine set that includes the ports as items in the spec.template.spec.providerSpec.value.ports list. 
This difference from the standard SR-IOV compute machine set is due to the automatic security group and allowed address pair configuration that occurs for ports that are created by using the network and subnet interfaces. Ports that you define for machines subnets require: Allowed address pairs for the API and ingress virtual IP ports The compute security group Attachment to the machines network and subnet Note Only parameters that are specific to SR-IOV deployments where port security is disabled are described in this sample. To review a more general sample, see "Sample YAML for a compute machine set custom resource that uses SR-IOV on RHOSP". An example compute machine set that uses SR-IOV networks and has port security disabled apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data configDrive: true 5 1 Specify allowed address pairs for the API and ingress ports. 2 3 Specify the machines network and subnet. 4 Specify the compute machines security group. 5 The value of the configDrive parameter must be true . Note Trunking is enabled for ports that are created by entries in the networks and subnets lists. The names of ports that are created from these lists follow the pattern <machine_name>-<nameSuffix> . The nameSuffix field is required in port definitions. You can enable trunking for each port. Optionally, you can add tags to ports as part of their tags lists. 2.8.4. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml .
Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.8.5. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. 
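If you prefer to apply this label to an existing machine set from the command line rather than editing its YAML, a merge patch along the following lines can be used; <machine_set_name> is a placeholder and nvidia-t4 is the example label value from above:
USD oc patch machineset <machine_set_name> \
  -n openshift-machine-api \
  --type merge \
  -p '{"spec":{"template":{"spec":{"metadata":{"labels":{"cluster-api/accelerator":"nvidia-t4"}}}}}}'
Only machines that the machine set creates after the change carry the new label; machines that already exist are not relabeled.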
Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.9. Creating a compute machine set on vSphere You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on VMware vSphere. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.9.1. Sample YAML for a compute machine set custom resource on vSphere This sample YAML defines a compute machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: "<vm_network_name>" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and node label. 6 7 9 Specify the node label to add. 10 Specify the vSphere VM network to deploy the compute machine set to. 
This VM network must be where other compute machines reside in the cluster. 11 Specify the vSphere VM template to use, such as user-5ddjd-rhcos . 12 Specify the vCenter data center to deploy the compute machine set on. 13 Specify the vCenter datastore to deploy the compute machine set on. 14 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 15 Specify the vSphere resource pool for your VMs. 16 Specify the vCenter server IP or fully qualified domain name. 2.9.2. Minimum required vCenter privileges for compute machine set management To manage compute machine sets in an OpenShift Container Platform cluster on vCenter, you must use an account with privileges to read, create, and delete the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the minimum required privileges. The following table lists the minimum vCenter roles and privileges that are required to create, scale, and delete compute machine sets and to delete machines in your OpenShift Container Platform cluster. Example 2.1. Minimum vCenter roles and privileges required for compute machine set management vSphere object for role When required Required privileges vSphere vCenter Always InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update 1 StorageProfile.View 1 vSphere vCenter Cluster Always Resource.AssignVMToPool vSphere datastore Always Datastore.AllocateSpace Datastore.Browse vSphere Port Group Always Network.Assign Virtual Machine Folder Always VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.Memory VirtualMachine.Config.Settings VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone vSphere vCenter data center If the installation program creates the virtual machine folder Resource.AssignVMToPool VirtualMachine.Provisioning.DeployTemplate 1 The StorageProfile.Update and StorageProfile.View permissions are required only for storage backends that use the Container Storage Interface (CSI). The following table details the permissions and propagation settings that are required for compute machine set management. Example 2.2. 
Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always Not required Listed required privileges vSphere vCenter data center Existing folder Not required ReadOnly permission Installation program creates the folder Required Listed required privileges vSphere vCenter Cluster Always Required Listed required privileges vSphere vCenter datastore Always Not required Listed required privileges vSphere Switch Always Not required ReadOnly permission vSphere Port Group Always Not required Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder Required Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. 2.9.3. Requirements for clusters with user-provisioned infrastructure to use compute machine sets To use compute machine sets on clusters that have user-provisioned infrastructure, you must ensure that your cluster configuration supports using the Machine API. Obtaining the infrastructure ID To create compute machine sets, you must be able to supply the infrastructure ID for your cluster. Procedure To obtain the infrastructure ID for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}' Satisfying vSphere credentials requirements To use compute machine sets, the Machine API must be able to interact with vCenter. Credentials that authorize the Machine API components to interact with vCenter must exist in a secret in the openshift-machine-api namespace. Procedure To determine whether the required credentials exist, run the following command: USD oc get secret \ -n openshift-machine-api vsphere-cloud-credentials \ -o go-template='{{range USDk,USDv := .data}}{{printf "%s: " USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{"\n"}}{{end}}' Sample output <vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user> where <vcenter-server> is the IP address or fully qualified domain name (FQDN) of the vCenter server and <openshift-user> and <openshift-user-password> are the OpenShift Container Platform administrator credentials to use. If the secret does not exist, create it by running the following command: USD oc create secret generic vsphere-cloud-credentials \ -n openshift-machine-api \ --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password> Satisfying Ignition configuration requirements Provisioning virtual machines (VMs) requires a valid Ignition configuration. The Ignition configuration contains the machine-config-server address and a system trust bundle for obtaining further Ignition configurations from the Machine Config Operator. By default, this configuration is stored in the worker-user-data secret in the openshift-machine-api namespace. Compute machine sets reference the secret during the machine creation process. Procedure To determine whether the required secret exists, run the following command: USD oc get secret \ -n openshift-machine-api worker-user-data \ -o go-template='{{range USDk,USDv := .data}}{{printf "%s: " USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{"\n"}}{{end}}' Sample output disableTemplating: false userData: 1 { "ignition": { ... }, ... } 1 The full output is omitted here, but should have this format.
If the secret does not exist, create it by running the following command: USD oc create secret generic worker-user-data \ -n openshift-machine-api \ --from-file=<installation_directory>/worker.ign where <installation_directory> is the directory that was used to store your installation assets during cluster installation. Additional resources Understanding the Machine Config Operator Installing RHCOS and starting the OpenShift Container Platform bootstrap process 2.9.4. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Note Clusters that are installed with user-provisioned infrastructure have a different networking stack than clusters with infrastructure that is provisioned by the installation program. As a result of this difference, automatic load balancer management is unsupported on clusters that have user-provisioned infrastructure. For these clusters, a compute machine set can only create worker and infra type machines. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Have the necessary permissions to deploy VMs in your vCenter instance and have the required access to the datastore specified. If your cluster uses user-provisioned infrastructure, you have satisfied the specific Machine API requirements for that configuration. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. 
For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. If you are creating a compute machine set for a cluster that has user-provisioned infrastructure, note the following important values: Example vSphere providerSpec values apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... template: ... spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: "<vm_network_name>" numCPUs: 4 numCoresPerSocket: 4 snapshot: "" template: <vm_template_name> 2 userDataSecret: name: worker-user-data 3 workspace: datacenter: <vcenter_data_center_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_address> 4 1 The name of the secret in the openshift-machine-api namespace that contains the required vCenter credentials. 2 The name of the RHCOS VM template for your cluster that was created during installation. 3 The name of the secret in the openshift-machine-api namespace that contains the required Ignition configuration credentials. 4 The IP address or fully qualified domain name (FQDN) of the vCenter server. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.9.5. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition 2.9.6. Adding tags to machines by using machine sets OpenShift Container Platform adds a cluster-specific tag to each virtual machine (VM) that it creates. The installation program uses these tags to select the VMs to delete when uninstalling a cluster. 
In addition to the cluster-specific tags assigned to VMs, you can configure a machine set to add up to 10 additional vSphere tags to the VMs it provisions. Prerequisites You have access to an OpenShift Container Platform cluster installed on vSphere using an account with cluster-admin permissions. You have access to the VMware vCenter console associated with your cluster. You have created a tag in the vCenter console. You have installed the OpenShift CLI ( oc ). Procedure Use the vCenter console to find the tag ID for any tag that you want to add to your machines: Log in to the vCenter console. From the Home menu, click Tags & Custom Attributes . Select a tag that you want to add to your machines. Use the browser URL for the tag that you select to identify the tag ID. Example tag URL https://vcenter.example.com/ui/app/tags/tag/urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL/permissions Example tag ID urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following lines under the providerSpec field: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: tagIDs: 1 - <tag_id_value> 2 # ... 1 Specify a list of up to 10 tags to add to the machines that this machine set provisions. 2 Specify the value of the tag that you want to add to your machines. For example, urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL . 2.9.7. Configuring multiple network interface controllers by using machine sets OpenShift Container Platform clusters on VMware vSphere support connecting up to 10 network interface controllers (NICs) to a node. By configuring multiple NICs, you can provide dedicated network links in the node virtual machines (VMs) for uses such as storage or databases. You can use machine sets to manage this configuration. If you want to use multiple NICs in a vSphere cluster that was not configured to do so during installation, you can use machine sets to implement this configuration. If your cluster was set up during installation to use multiple NICs, machine sets that you create can use your existing failure domain configuration. If your failure domain configuration changes, you can use machine sets to make updates that reflect those changes. Important Configuring multiple NICs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have administrator access to OpenShift CLI ( oc ) for an OpenShift Container Platform cluster on vSphere. Procedure For a cluster that already uses multiple NICs, obtain the following values from the Infrastructure resource by running the following command: USD oc get infrastructure cluster -o=jsonpath={.spec.platformSpec.vsphere.failureDomains} Table 2.3. Required network interface controller values Infrastructure resource value Placeholder value for sample machine set Description failureDomain.topology.networks[0] <vm_network_name_1> The name of the first NIC to use. 
failureDomain.topology.networks[1] <vm_network_name_2> The name of the second NIC to use. failureDomain.topology.networks[<n-1>] <vm_network_name_n> The name of the n th NIC to use. Collect the name of each NIC in the Infrastructure resource. failureDomain.topology.template <vm_template_name> The vSphere VM template to use. failureDomain.topology.datacenter <vcenter_data_center_name> The vCenter data center to deploy the machine set on. failureDomain.topology.datastore <vcenter_datastore_name> The vCenter datastore to deploy the machine set on. failureDomain.topology.folder <vcenter_vm_folder_path> The path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . failureDomain.topology.computeCluster + /Resources <vsphere_resource_pool> The vSphere resource pool for your VMs. failureDomain.server <vcenter_server_ip> The vCenter server IP or fully qualified domain name (FQDN). In a text editor, open the YAML file for an existing machine set or create a new one. Use a machine set configuration formatted like the following example. For a cluster that currently uses multiple NICs, use the values from the Infrastructure resource to populate the values in the machine set custom resource. For a cluster that is not using multiple NICs, populate the values you want to use in the machine set custom resource. Sample machine set apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: template: spec: providerSpec: value: network: devices: 1 - networkName: "<vm_network_name_1>" - networkName: "<vm_network_name_2>" template: <vm_template_name> 2 workspace: datacenter: <vcenter_data_center_name> 3 datastore: <vcenter_datastore_name> 4 folder: <vcenter_vm_folder_path> 5 resourcepool: <vsphere_resource_pool> 6 server: <vcenter_server_ip> 7 # ... 1 Specify a list of up to 10 NICs to use. 2 Specify the vSphere VM template to use, such as user-5ddjd-rhcos . 3 Specify the vCenter data center to deploy the machine set on. 4 Specify the vCenter datastore to deploy the machine set on. 5 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 6 Specify the vSphere resource pool for your VMs. 7 Specify the vCenter server IP or fully qualified domain name (FQDN). 2.10. Creating a compute machine set on bare metal You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on bare metal. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 2.10.1. Sample YAML for a compute machine set custom resource on bare metal This sample YAML defines a compute machine set that runs on bare metal and creates nodes that are labeled with node-role.kubernetes.io/<role>: "" . 
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: "" 9 providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 10 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 11 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data-managed 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and node label. 6 7 9 Specify the node label to add. 10 Edit the checksum URL to use the API VIP address. 11 Edit the url URL to use the API VIP address. 2.10.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 2.10.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". Additional resources Cluster autoscaler resource definition | [
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5 placementGroupPartition: <placement_group_partition_number> 6",
"providerSpec: value: metadataServiceOptions: authentication: Required 1",
"providerSpec: placement: tenancy: dedicated",
"providerSpec: value: spotMarketOptions: {}",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.31.3 ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.31.3 ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.31.3 ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.31.3 ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.31.3 ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.31.3",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE preserve-dsoc12r4-ktjfc-worker-us-east-2a 1 1 1 1 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b 2 2 2 2 3d11h",
"oc get machines -n openshift-machine-api | grep worker",
"preserve-dsoc12r4-ktjfc-worker-us-east-2a-dts8r Running m5.xlarge us-east-2 us-east-2a 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-dkv7w Running m5.xlarge us-east-2 us-east-2b 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-k58cw Running m5.xlarge us-east-2 us-east-2b 3d11h",
"oc get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -n openshift-machine-api -o json > <output_file.json>",
"jq .spec.template.spec.providerSpec.value.instanceType preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json \"g4dn.xlarge\"",
"oc -n openshift-machine-api get preserve-dsoc12r4-ktjfc-worker-us-east-2a -o json | diff preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json -",
"10c10 < \"name\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\", --- > \"name\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\", 21c21 < \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\" 31c31 < \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\" 60c60 < \"instanceType\": \"g4dn.xlarge\", --- > \"instanceType\": \"m5.xlarge\",",
"oc create -f preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json",
"machineset.machine.openshift.io/preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a created",
"oc -n openshift-machine-api get machinesets | grep gpu",
"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a 1 1 1 1 4m21s",
"oc -n openshift-machine-api get machines | grep gpu",
"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a running g4dn.xlarge us-east-2 us-east-2a 4m36s",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d",
"oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'",
"Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: \"1\" 8",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700",
"providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1",
"providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2",
"providerSpec: value: spotVMOptions: {}",
"oc edit machineset <machine-set-name>",
"providerSpec: value: osDisk: diskSettings: 1 ephemeralStorageLocation: Local 2 cachingType: ReadOnly 3 managedDisk: storageAccountType: Standard_LRS 4",
"oc create -f <machine-set-config>.yaml",
"oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2",
"\"storage\": { \"disks\": [ 1 { \"device\": \"/dev/disk/azure/scsi1/lun0\", 2 \"partitions\": [ 3 { \"label\": \"lun0p1\", 4 \"sizeMiB\": 1024, 5 \"startMiB\": 0 } ] } ], \"filesystems\": [ 6 { \"device\": \"/dev/disk/by-partlabel/lun0p1\", \"format\": \"xfs\", \"path\": \"/var/lib/lun0p1\" } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var/lib/lun0p1\\nWhat=/dev/disk/by-partlabel/lun0p1\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", 8 \"enabled\": true, \"name\": \"var-lib-lun0p1.mount\" } ] }",
"oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt",
"oc -n openshift-machine-api create secret generic <role>-user-data-x5 \\ 1 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt",
"oc edit machineset <machine-set-name>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4",
"oc create -f <machine-set-name>.yaml",
"oc get machines",
"oc debug node/<node-name> -- chroot /host lsblk",
"apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd",
"StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.",
"failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code=\"BadRequest\" Message=\"Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>.\"",
"providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: osDisk: # managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1",
"oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 6h9m myclustername-worker-centralus2 1 1 1 1 6h9m myclustername-worker-centralus3 1 1 1 1 6h9m",
"oc get machineset -n openshift-machine-api myclustername-worker-centralus1 -o yaml > machineset-azure.yaml",
"cat machineset-azure.yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: \"0\" machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" creationTimestamp: \"2023-02-06T14:08:19Z\" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-worker-centralus1 namespace: openshift-machine-api resourceVersion: \"23601\" uid: acd56e0c-7612-473a-ae37-8704f34b80de spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: myclustername-vnet zone: \"1\" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1",
"cp machineset-azure.yaml machineset-azure-gpu.yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: \"1\" machine.openshift.io/memoryMb: \"28672\" machine.openshift.io/vCPU: \"4\" creationTimestamp: \"2023-02-06T20:27:12Z\" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-nc4ast4-gpu-worker-centralus1 namespace: openshift-machine-api resourceVersion: \"166285\" uid: 4eedce7f-6a57-4abe-b529-031140f02ffa spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_NC4as_T4_v3 vnet: myclustername-vnet zone: \"1\" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1",
"diff machineset-azure.yaml machineset-azure-gpu.yaml",
"14c14 < name: myclustername-worker-centralus1 --- > name: myclustername-nc4ast4-gpu-worker-centralus1 23c23 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 30c30 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 67c67 < vmSize: Standard_D4s_v3 --- > vmSize: Standard_NC4as_T4_v3",
"oc create -f machineset-azure-gpu.yaml",
"machineset.machine.openshift.io/myclustername-nc4ast4-gpu-worker-centralus1 created",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE clustername-n6n4r-nc4ast4-gpu-worker-centralus1 1 1 1 1 122m clustername-n6n4r-worker-centralus1 1 1 1 1 8h clustername-n6n4r-worker-centralus2 1 1 1 1 8h clustername-n6n4r-worker-centralus3 1 1 1 1 8h",
"oc get machines -n openshift-machine-api",
"NAME PHASE TYPE REGION ZONE AGE myclustername-master-0 Running Standard_D8s_v3 centralus 2 6h40m myclustername-master-1 Running Standard_D8s_v3 centralus 1 6h40m myclustername-master-2 Running Standard_D8s_v3 centralus 3 6h40m myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running centralus 1 21m myclustername-worker-centralus1-rbh6b Running Standard_D4s_v3 centralus 1 6h38m myclustername-worker-centralus2-dbz7w Running Standard_D4s_v3 centralus 2 6h38m myclustername-worker-centralus3-p9b8c Running Standard_D4s_v3 centralus 3 6h38m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION myclustername-master-0 Ready control-plane,master 6h39m v1.31.3 myclustername-master-1 Ready control-plane,master 6h41m v1.31.3 myclustername-master-2 Ready control-plane,master 6h39m v1.31.3 myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.31.3 myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.31.3 myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.31.3 myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.31.3",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h",
"oc create -f machineset-azure-gpu.yaml",
"get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h",
"oc get machineset -n openshift-machine-api | grep gpu",
"myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m",
"oc -n openshift-machine-api get machines | grep gpu",
"myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running Standard_NC4as_T4_v3 centralus 1 21m",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d",
"oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'",
"Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true",
"providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 11 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 12 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: \"1\" 21",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1",
"providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2",
"providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: disks: type: <pd-disk-type> 1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3",
"providerSpec: value: preemptible: true",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5",
"providerSpec: value: machineType: a2-highgpu-1g 1 onHostMaintenance: Terminate 2 restartPolicy: Always 3",
"providerSpec: value: gpus: - count: 1 1 type: nvidia-tesla-p100 2 machineType: n1-standard-1 3 onHostMaintenance: Terminate 4 restartPolicy: Always 5",
"machineType: a2-highgpu-1g onHostMaintenance: Terminate",
"{ \"apiVersion\": \"machine.openshift.io/v1beta1\", \"kind\": \"MachineSet\", \"metadata\": { \"annotations\": { \"machine.openshift.io/GPU\": \"0\", \"machine.openshift.io/memoryMb\": \"16384\", \"machine.openshift.io/vCPU\": \"4\" }, \"creationTimestamp\": \"2023-01-13T17:11:02Z\", \"generation\": 1, \"labels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\" }, \"name\": \"myclustername-2pt9p-worker-gpu-a\", \"namespace\": \"openshift-machine-api\", \"resourceVersion\": \"20185\", \"uid\": \"2daf4712-733e-4399-b4b4-d43cb1ed32bd\" }, \"spec\": { \"replicas\": 1, \"selector\": { \"matchLabels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\", \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" } }, \"template\": { \"metadata\": { \"labels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\", \"machine.openshift.io/cluster-api-machine-role\": \"worker\", \"machine.openshift.io/cluster-api-machine-type\": \"worker\", \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" } }, \"spec\": { \"lifecycleHooks\": {}, \"metadata\": {}, \"providerSpec\": { \"value\": { \"apiVersion\": \"machine.openshift.io/v1beta1\", \"canIPForward\": false, \"credentialsSecret\": { \"name\": \"gcp-cloud-credentials\" }, \"deletionProtection\": false, \"disks\": [ { \"autoDelete\": true, \"boot\": true, \"image\": \"projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64\", \"labels\": null, \"sizeGb\": 128, \"type\": \"pd-ssd\" } ], \"kind\": \"GCPMachineProviderSpec\", \"machineType\": \"a2-highgpu-1g\", \"onHostMaintenance\": \"Terminate\", \"metadata\": { \"creationTimestamp\": null }, \"networkInterfaces\": [ { \"network\": \"myclustername-2pt9p-network\", \"subnetwork\": \"myclustername-2pt9p-worker-subnet\" } ], \"preemptible\": true, \"projectID\": \"myteam\", \"region\": \"us-central1\", \"serviceAccounts\": [ { \"email\": \"[email protected]\", \"scopes\": [ \"https://www.googleapis.com/auth/cloud-platform\" ] } ], \"tags\": [ \"myclustername-2pt9p-worker\" ], \"userDataSecret\": { \"name\": \"worker-user-data\" }, \"zone\": \"us-central1-a\" } } } } }, \"status\": { \"availableReplicas\": 1, \"fullyLabeledReplicas\": 1, \"observedGeneration\": 1, \"readyReplicas\": 1, \"replicas\": 1 } }",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.31.3 myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.31.3 myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.31.3 myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.31.3 myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.31.3 myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.31.3 myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.31.3",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-2pt9p-worker-a 1 1 1 1 8h myclustername-2pt9p-worker-b 1 1 1 1 8h myclustername-2pt9p-worker-c 1 1 8h myclustername-2pt9p-worker-f 0 0 8h",
"oc get machines -n openshift-machine-api | grep worker",
"myclustername-2pt9p-worker-a-mxtnz Running n2-standard-4 us-central1 us-central1-a 8h myclustername-2pt9p-worker-b-9pzzn Running n2-standard-4 us-central1 us-central1-b 8h myclustername-2pt9p-worker-c-6pbg6 Running n2-standard-4 us-central1 us-central1-c 8h",
"oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json > <output_file.json>",
"jq .spec.template.spec.providerSpec.value.machineType ocp_4.18_machineset-a2-highgpu-1g.json \"a2-highgpu-1g\"",
"\"machineType\": \"a2-highgpu-1g\", \"onHostMaintenance\": \"Terminate\",",
"oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.18_machineset-a2-highgpu-1g.json -",
"15c15 < \"name\": \"myclustername-2pt9p-worker-gpu-a\", --- > \"name\": \"myclustername-2pt9p-worker-a\", 25c25 < \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-a\" 34c34 < \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-a\" 59,60c59 < \"machineType\": \"a2-highgpu-1g\", < \"onHostMaintenance\": \"Terminate\", --- > \"machineType\": \"n2-standard-4\",",
"oc create -f ocp_4.18_machineset-a2-highgpu-1g.json",
"machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created",
"oc -n openshift-machine-api get machinesets | grep gpu",
"myclustername-2pt9p-worker-gpu-a 1 1 1 1 5h24m",
"oc -n openshift-machine-api get machines | grep gpu",
"myclustername-2pt9p-worker-gpu-a-wxcr6 Running a2-highgpu-1g us-central1 us-central1-a 5h25m",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d",
"oc get pods -n openshift-nfd",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d",
"oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'",
"Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: powervs-credentials image: name: rhcos-<infrastructure_id> 11 type: Name keyPairName: <infrastructure_id>-key kind: PowerVSMachineProviderConfig memoryGiB: 32 network: regex: ^DHCPSERVER[0-9a-z]{32}_PrivateUSD type: RegEx processorType: Shared processors: \"0.5\" serviceInstance: id: <ibm_power_vs_service_instance_id> type: ID 12 systemType: s922 userDataSecret: name: <role>-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: 11 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 12 userDataSecret: name: <user_data_secret> 13 vcpuSockets: 4 14 vcpusPerSocket: 1 15",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: <rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone>",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: <machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone> configDrive: true 9",
"oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable=\"true\"",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data configDrive: true 5",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'",
"oc get secret -n openshift-machine-api vsphere-cloud-credentials -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'",
"<vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user>",
"oc create secret generic vsphere-cloud-credentials -n openshift-machine-api --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password>",
"oc get secret -n openshift-machine-api worker-user-data -o go-template='{{range USDk,USDv := .data}}{{printf \"%s: \" USDk}}{{if not USDv}}{{USDv}}{{else}}{{USDv | base64decode}}{{end}}{{\"\\n\"}}{{end}}'",
"disableTemplating: false userData: 1 { \"ignition\": { }, }",
"oc create secret generic worker-user-data -n openshift-machine-api --from-file=<installation_directory>/worker.ign",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" numCPUs: 4 numCoresPerSocket: 4 snapshot: \"\" template: <vm_template_name> 2 userDataSecret: name: worker-user-data 3 workspace: datacenter: <vcenter_data_center_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_address> 4",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"https://vcenter.example.com/ui/app/tags/tag/urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL/permissions",
"urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: tagIDs: 1 - <tag_id_value> 2",
"oc get infrastructure cluster -o=jsonpath={.spec.platformSpec.vsphere.failureDomains}",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: network: devices: 1 - networkName: \"<vm_network_name_1>\" - networkName: \"<vm_network_name_2>\" template: <vm_template_name> 2 workspace: datacenter: <vcenter_data_center_name> 3 datastore: <vcenter_datastore_name> 4 folder: <vcenter_vm_folder_path> 5 resourcepool: <vsphere_resource_pool> 6 server: <vcenter_server_ip> 7",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http:/172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 10 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 11 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data-managed",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_management/managing-compute-machines-with-the-machine-api |
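As a brief, hedged follow-up to the machine set listing above: after a compute machine set is created, it can be resized without editing the YAML by using the oc scale command. The machine set name below is taken from the sample output above, and the replica count of 2 is purely illustrative.

oc scale --replicas=2 machineset agl030519-vplxk-infra-us-east-1a -n openshift-machine-api

oc get machineset agl030519-vplxk-infra-us-east-1a -n openshift-machine-api

Alternatively, the spec.replicas field can be edited in place with oc edit machineset <machineset_name> -n openshift-machine-api.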
10.6. Reporting the Incident | 10.6. Reporting the Incident The last part of the incident response plan is reporting the incident. The security team should take notes as the response happens and report all issues to the appropriate organizations, such as local and federal authorities, or to multi-vendor software vulnerability portals such as the Common Vulnerabilities and Exposures (CVE) site at http://cve.mitre.org/ . Depending on the type of legal counsel an enterprise employs, a post-mortem analysis may be required. Even if it is not a formal requirement of the compromise analysis, a post-mortem can prove invaluable in helping to learn how a cracker thinks and how the systems are structured so that future compromises can be prevented. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s1-response-report |
9.3. Managing High-Availability Services | 9.3. Managing High-Availability Services You can manage high-availability services using the Cluster Status Utility , clustat , and the Cluster User Service Administration Utility , clusvcadm . clustat displays the status of a cluster and clusvcadm provides the means to manage high-availability services. This section provides basic information about managing HA services using the clustat and clusvcadm commands. It consists of the following subsections: Section 9.3.1, "Displaying HA Service Status with clustat " Section 9.3.2, "Managing HA Services with clusvcadm " 9.3.1. Displaying HA Service Status with clustat clustat displays cluster-wide status. It shows membership information, quorum view, the state of all high-availability services, and indicates the node on which the clustat command is being run (Local). Table 9.1, "Services Status" describes the states that services can be in, which are displayed when running clustat . Example 9.3, " clustat Display" shows an example of a clustat display. For more detailed information about running the clustat command, see the clustat man page. Table 9.1. Services Status Services Status Description Started The service resources are configured and available on the cluster system that owns the service. Recovering The service is pending start on another node. Disabled The service has been disabled, and does not have an assigned owner. A disabled service is never restarted automatically by the cluster. Stopped In the stopped state, the service will be evaluated for starting after the next service or node transition. This is a temporary state. You may disable or enable the service from this state. Failed The service is presumed dead. A service is placed into this state whenever a resource's stop operation fails. After a service is placed into this state, you must verify that there are no resources allocated (mounted file systems, for example) prior to issuing a disable request. The only operation that can take place when a service has entered this state is disable . Uninitialized This state can appear in certain cases during startup and running clustat -f . Example 9.3. clustat Display | [
"clustat Cluster Status for mycluster @ Wed Nov 17 05:40:15 2010 Member Status: Quorate Member Name ID Status ------ ---- ---- ------ node-03.example.com 3 Online, rgmanager node-02.example.com 2 Online, rgmanager node-01.example.com 1 Online, Local, rgmanager Service Name Owner (Last) State ------- ---- ----- ------ ----- service:example_apache node-01.example.com started service:example_apache2 (none) disabled"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-admin-manage-ha-services-cli-CA |
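Building on the clustat output above, the following is a minimal sketch of driving the same example service with clusvcadm. The service and member names (example_apache, node-01.example.com, node-02.example.com) are taken from the sample cluster shown above and are illustrative only.

clusvcadm -e example_apache -m node-01.example.com    # enable (start) the service on a preferred member
clusvcadm -r example_apache -m node-02.example.com    # relocate the running service to another member
clusvcadm -d example_apache                           # disable (stop) the service

Run clustat again after each operation to confirm the new owner and state of the service.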
Chapter 11. Secondary networks | Chapter 11. Secondary networks You can configure the Network Observability Operator to collect and enrich network flow data from secondary networks, such as SR-IOV and OVN-Kubernetes. Prerequisites Access to an OpenShift Container Platform cluster with an additional network interface, such as a secondary interface or an L2 network. 11.1. Configuring monitoring for SR-IOV interface traffic In order to collect traffic from a cluster with a Single Root I/O Virtualization (SR-IOV) device, you must set the FlowCollector spec.agent.ebpf.privileged field to true . Then, the eBPF agent monitors other network namespaces in addition to the host network namespaces, which are monitored by default. When a pod with a virtual function (VF) interface is created, a new network namespace is created. With an SRIOVNetwork policy IPAM configuration specified, the VF interface is migrated from the host network namespace to the pod network namespace. Prerequisites Access to an OpenShift Container Platform cluster with an SR-IOV device. The SRIOVNetwork custom resource (CR) spec.ipam configuration must be set with an IP address from the range that the interface lists or from other plugins. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster and then select the YAML tab. Configure the FlowCollector custom resource. A sample configuration is as follows: Configure FlowCollector for SR-IOV monitoring apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: privileged: true 1 1 The spec.agent.ebpf.privileged field value must be set to true to enable SR-IOV monitoring. Additional resources * Creating an additional SR-IOV network attachment with the CNI VRF plugin . 11.2. Configuring virtual machine (VM) secondary network interfaces for Network Observability You can observe network traffic on an OpenShift Virtualization setup by identifying eBPF-enriched network flows coming from VMs that are connected to secondary networks, such as through OVN-Kubernetes. Network flows coming from VMs that are connected to the default internal pod network are automatically captured by Network Observability. Procedure Get information about the virtual machine launcher pod by running the following command. This information is used in Step 5: $ oc get pod virt-launcher-<vm_name>-<suffix> -n <namespace> -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.129.2.39" ], "mac": "0a:58:0a:81:02:27", "default": true, "dns": {} }, { "name": "my-vms/l2-network", 1 "interface": "podc0f69e19ba2", 2 "ips": [ 3 "10.10.10.15" ], "mac": "02:fb:f8:00:00:12", 4 "dns": {} }] name: virt-launcher-fedora-aqua-fowl-13-zr2x9 namespace: my-vms spec: # ... status: # ... 1 The name of the secondary network. 2 The network interface name of the secondary network. 3 The list of IPs used by the secondary network. 4 The MAC address used for the secondary network. In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster and then select the YAML tab.
Configure FlowCollector based on the information you found from the additional network investigation: apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: ebpf: privileged: true 1 processor: advanced: secondaryNetworks: - index: 2 - MAC 3 name: my-vms/l2-network 4 # ... 1 Ensure that the eBPF agent is in privileged mode so that flows are collected for secondary interfaces. 2 Define the fields to use for indexing the virtual machine launcher pods. It is recommended to use the MAC address as the indexing field to get network flow enrichment for secondary interfaces. If you have overlapping MAC addresses between pods, then additional indexing fields, such as IP and Interface , can be added for accurate enrichment. 3 If your additional network information has a MAC address, add MAC to the field list. 4 Specify the name of the network found in the k8s.v1.cni.cncf.io/network-status annotation, usually <namespace>/<network_attachment_definition_name>. Observe VM traffic: Navigate to the Network Traffic page. Filter by Source IP using your virtual machine IP found in the k8s.v1.cni.cncf.io/network-status annotation. View both Source and Destination fields, which should be enriched, and identify the VM launcher pods and the VM instance as owners. | [
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: privileged: true 1",
"oc get pod virt-launcher-<vm_name>-<suffix> -n <namespace> -o yaml",
"apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.129.2.39\" ], \"mac\": \"0a:58:0a:81:02:27\", \"default\": true, \"dns\": {} }, { \"name\": \"my-vms/l2-network\", 1 \"interface\": \"podc0f69e19ba2\", 2 \"ips\": [ 3 \"10.10.10.15\" ], \"mac\": \"02:fb:f8:00:00:12\", 4 \"dns\": {} }] name: virt-launcher-fedora-aqua-fowl-13-zr2x9 namespace: my-vms spec: status:",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: ebpf: privileged: true 1 processor: advanced: secondaryNetworks: - index: 2 - MAC 3 name: my-vms/l2-network 4"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/network_observability/network-observability-secondary-networks |
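The SR-IOV prerequisite above calls for an SRIOVNetwork custom resource with spec.ipam set. The following is a minimal sketch of such a resource, assuming the SR-IOV Network Operator is installed; the resource name, target namespace, and address range are hypothetical and must match your own SR-IOV setup.

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: example-sriov-network
  namespace: openshift-sriov-network-operator
spec:
  resourceName: example_resource
  networkNamespace: netobserv-demo
  ipam: |
    {
      "type": "host-local",
      "subnet": "10.56.217.0/24",
      "rangeStart": "10.56.217.171",
      "rangeEnd": "10.56.217.181"
    }

With privileged: true set in the FlowCollector resource as shown above, flows on VF interfaces attached through this network are then captured and enriched.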
Appendix A. Broker configuration parameters | Appendix A. Broker configuration parameters advertised.host.name Type: string Default: null Importance: high Dynamic update: read-only DEPRECATED: only used when advertised.listeners or listeners are not set. Use advertised.listeners instead. Hostname to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, it will use the value for host.name if configured. Otherwise it will use the value returned from java.net.InetAddress.getCanonicalHostName(). advertised.listeners Type: string Default: null Importance: high Dynamic update: per-broker Listeners to publish to ZooKeeper for clients to use, if different than the listeners config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners , it is not valid to advertise the 0.0.0.0 meta-address. Also unlike listeners , there can be duplicated ports in this property, so that one listener can be configured to advertise another listener's address. This can be useful in some cases where external load balancers are used. advertised.port Type: int Default: null Importance: high Dynamic update: read-only DEPRECATED: only used when advertised.listeners or listeners are not set. Use advertised.listeners instead. The port to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the port to which the broker binds. If this is not set, it will publish the same port that the broker binds to. auto.create.topics.enable Type: boolean Default: true Importance: high Dynamic update: read-only Enable auto creation of topic on the server. auto.leader.rebalance.enable Type: boolean Default: true Importance: high Dynamic update: read-only Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable by leader.imbalance.check.interval.seconds . If the leader imbalance exceeds leader.imbalance.per.broker.percentage , leader rebalance to the preferred leader for partitions is triggered. background.threads Type: int Default: 10 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads to use for various background processing tasks. broker.id Type: int Default: -1 Importance: high Dynamic update: read-only The broker id for this server. If unset, a unique broker id will be generated.To avoid conflicts between zookeeper generated broker id's and user configured broker id's, generated broker ids start from reserved.broker.max.id + 1. compression.type Type: string Default: producer Importance: high Dynamic update: cluster-wide Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer. control.plane.listener.name Type: string Default: null Importance: high Dynamic update: read-only Name of listener used for communication between controller and brokers. Broker will use the control.plane.listener.name to locate the endpoint in listeners list, to listen for connections from the controller. 
For example, if a broker's config is : listeners = INTERNAL://192.1.1.8:9092, EXTERNAL://10.1.1.5:9093, CONTROLLER://192.1.1.8:9094 listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL control.plane.listener.name = CONTROLLER On startup, the broker will start listening on "192.1.1.8:9094" with security protocol "SSL". On controller side, when it discovers a broker's published endpoints through zookeeper, it will use the control.plane.listener.name to find the endpoint, which it will use to establish connection to the broker. For example, if the broker's published endpoints on zookeeper are : "endpoints" : ["INTERNAL://broker1.example.com:9092","EXTERNAL://broker1.example.com:9093","CONTROLLER://broker1.example.com:9094"] and the controller's config is : listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL control.plane.listener.name = CONTROLLER then controller will use "broker1.example.com:9094" with security protocol "SSL" to connect to the broker. If not explicitly configured, the default value will be null and there will be no dedicated endpoints for controller connections. delete.topic.enable Type: boolean Default: true Importance: high Dynamic update: read-only Enables delete topic. Delete topic through the admin tool will have no effect if this config is turned off. host.name Type: string Default: "" Importance: high Dynamic update: read-only DEPRECATED: only used when listeners is not set. Use listeners instead. hostname of broker. If this is set, it will only bind to this address. If this is not set, it will bind to all interfaces. leader.imbalance.check.interval.seconds Type: long Default: 300 Importance: high Dynamic update: read-only The frequency with which the partition rebalance check is triggered by the controller. leader.imbalance.per.broker.percentage Type: int Default: 10 Importance: high Dynamic update: read-only The ratio of leader imbalance allowed per broker. The controller would trigger a leader balance if it goes above this value per broker. The value is specified in percentage. listeners Type: string Default: null Importance: high Dynamic update: per-broker Listener List - Comma-separated list of URIs we will listen on and the listener names. If the listener name is not a security protocol, listener.security.protocol.map must also be set. Listener names and port numbers must be unique. Specify hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to default interface. Examples of legal listener lists: PLAINTEXT://myhost:9092,SSL://:9091 CLIENT://0.0.0.0:9092,REPLICATION://localhost:9093. log.dir Type: string Default: /tmp/kafka-logs Importance: high Dynamic update: read-only The directory in which the log data is kept (supplemental for log.dirs property). log.dirs Type: string Default: null Importance: high Dynamic update: read-only The directories in which the log data is kept. If not set, the value in log.dir is used. log.flush.interval.messages Type: long Default: 9223372036854775807 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of messages accumulated on a log partition before messages are flushed to disk. log.flush.interval.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used. log.flush.offset.checkpoint.interval.ms Type: int Default: 60000 (1 minute) Valid Values: [0,... 
] Importance: high Dynamic update: read-only The frequency with which we update the persistent record of the last flush which acts as the log recovery point. log.flush.scheduler.interval.ms Type: long Default: 9223372036854775807 Importance: high Dynamic update: read-only The frequency in ms that the log flusher checks whether any log needs to be flushed to disk. log.flush.start.offset.checkpoint.interval.ms Type: int Default: 60000 (1 minute) Valid Values: [0,... ] Importance: high Dynamic update: read-only The frequency with which we update the persistent record of log start offset. log.retention.bytes Type: long Default: -1 Importance: high Dynamic update: cluster-wide The maximum size of the log before deleting it. log.retention.hours Type: int Default: 168 Importance: high Dynamic update: read-only The number of hours to keep a log file before deleting it (in hours), tertiary to log.retention.ms property. log.retention.minutes Type: int Default: null Importance: high Dynamic update: read-only The number of minutes to keep a log file before deleting it (in minutes), secondary to log.retention.ms property. If not set, the value in log.retention.hours is used. log.retention.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The number of milliseconds to keep a log file before deleting it (in milliseconds), If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied. log.roll.hours Type: int Default: 168 Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum time before a new log segment is rolled out (in hours), secondary to log.roll.ms property. log.roll.jitter.hours Type: int Default: 0 Valid Values: [0,... ] Importance: high Dynamic update: read-only The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to log.roll.jitter.ms property. log.roll.jitter.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used. log.roll.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used. log.segment.bytes Type: int Default: 1073741824 (1 gibibyte) Valid Values: [14,... ] Importance: high Dynamic update: cluster-wide The maximum size of a single log file. log.segment.delete.delay.ms Type: long Default: 60000 (1 minute) Valid Values: [0,... ] Importance: high Dynamic update: cluster-wide The amount of time to wait before deleting a file from the filesystem. message.max.bytes Type: int Default: 1048588 Valid Values: [0,... ] Importance: high Dynamic update: cluster-wide The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.This can be set per topic with the topic level max.message.bytes config. min.insync.replicas Type: int Default: 1 Valid Values: [1,... 
] Importance: high Dynamic update: cluster-wide When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write. num.io.threads Type: int Default: 8 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads that the server uses for processing requests, which may include disk I/O. num.network.threads Type: int Default: 3 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads that the server uses for receiving requests from the network and sending responses to the network. num.recovery.threads.per.data.dir Type: int Default: 1 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads per data directory to be used for log recovery at startup and flushing at shutdown. num.replica.alter.log.dirs.threads Type: int Default: null Importance: high Dynamic update: read-only The number of threads that can move replicas between log directories, which may include disk I/O. num.replica.fetchers Type: int Default: 1 Importance: high Dynamic update: cluster-wide Number of fetcher threads used to replicate messages from a source broker. Increasing this value can increase the degree of I/O parallelism in the follower broker. offset.metadata.max.bytes Type: int Default: 4096 (4 kibibytes) Importance: high Dynamic update: read-only The maximum size for a metadata entry associated with an offset commit. offsets.commit.required.acks Type: short Default: -1 Importance: high Dynamic update: read-only The required acks before the commit can be accepted. In general, the default (-1) should not be overridden. offsets.commit.timeout.ms Type: int Default: 5000 (5 seconds) Valid Values: [1,... ] Importance: high Dynamic update: read-only Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout. offsets.load.buffer.size Type: int Default: 5242880 Valid Values: [1,... ] Importance: high Dynamic update: read-only Batch size for reading from the offsets segments when loading offsets into the cache (soft-limit, overridden if records are too large). offsets.retention.check.interval.ms Type: long Default: 600000 (10 minutes) Valid Values: [1,... ] Importance: high Dynamic update: read-only Frequency at which to check for stale offsets. offsets.retention.minutes Type: int Default: 10080 Valid Values: [1,... ] Importance: high Dynamic update: read-only After a consumer group loses all its consumers (i.e. becomes empty) its offsets will be kept for this retention period before getting discarded. For standalone consumers (using manual assignment), offsets will be expired after the time of last commit plus this retention period. offsets.topic.compression.codec Type: int Default: 0 Importance: high Dynamic update: read-only Compression codec for the offsets topic - compression may be used to achieve "atomic" commits. 
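To make the min.insync.replicas and acks interaction described earlier in this appendix concrete, here is a minimal sketch of the typical durability setup from that entry; the topic name and bootstrap address are placeholders.

bin/kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic orders --partitions 3 --replication-factor 3 \
  --config min.insync.replicas=2

# producer configuration
acks=all

With this combination, a produce request fails with NotEnoughReplicas if fewer than two replicas are in sync, rather than silently accepting an under-replicated write.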
offsets.topic.num.partitions Type: int Default: 50 Valid Values: [1,... ] Importance: high Dynamic update: read-only The number of partitions for the offset commit topic (should not change after deployment). offsets.topic.replication.factor Type: short Default: 3 Valid Values: [1,... ] Importance: high Dynamic update: read-only The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement. offsets.topic.segment.bytes Type: int Default: 104857600 (100 mebibytes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads. port Type: int Default: 9092 Importance: high Dynamic update: read-only DEPRECATED: only used when listeners is not set. Use listeners instead. the port to listen and accept connections on. queued.max.requests Type: int Default: 500 Valid Values: [1,... ] Importance: high Dynamic update: read-only The number of queued requests allowed for data-plane, before blocking the network threads. quota.consumer.default Type: long Default: 9223372036854775807 Valid Values: [1,... ] Importance: high Dynamic update: read-only DEPRECATED: Used only when dynamic default quotas are not configured for <user, <client-id> or <user, client-id> in Zookeeper. Any consumer distinguished by clientId/consumer group will get throttled if it fetches more bytes than this value per-second. quota.producer.default Type: long Default: 9223372036854775807 Valid Values: [1,... ] Importance: high Dynamic update: read-only DEPRECATED: Used only when dynamic default quotas are not configured for <user>, <client-id> or <user, client-id> in Zookeeper. Any producer distinguished by clientId will get throttled if it produces more bytes than this value per-second. replica.fetch.min.bytes Type: int Default: 1 Importance: high Dynamic update: read-only Minimum bytes expected for each fetch response. If not enough bytes, wait up to replica.fetch.wait.max.ms (broker config). replica.fetch.wait.max.ms Type: int Default: 500 Importance: high Dynamic update: read-only max wait time for each fetcher request issued by follower replicas. This value should always be less than the replica.lag.time.max.ms at all times to prevent frequent shrinking of ISR for low throughput topics. replica.high.watermark.checkpoint.interval.ms Type: long Default: 5000 (5 seconds) Importance: high Dynamic update: read-only The frequency with which the high watermark is saved out to disk. replica.lag.time.max.ms Type: long Default: 30000 (30 seconds) Importance: high Dynamic update: read-only If a follower hasn't sent any fetch requests or hasn't consumed up to the leaders log end offset for at least this time, the leader will remove the follower from isr. replica.socket.receive.buffer.bytes Type: int Default: 65536 (64 kibibytes) Importance: high Dynamic update: read-only The socket receive buffer for network requests. replica.socket.timeout.ms Type: int Default: 30000 (30 seconds) Importance: high Dynamic update: read-only The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms. request.timeout.ms Type: int Default: 30000 (30 seconds) Importance: high Dynamic update: read-only The configuration controls the maximum amount of time the client will wait for the response of a request. 
If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. socket.receive.buffer.bytes Type: int Default: 102400 (100 kibibytes) Importance: high Dynamic update: read-only The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. socket.request.max.bytes Type: int Default: 104857600 (100 mebibytes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum number of bytes in a socket request. socket.send.buffer.bytes Type: int Default: 102400 (100 kibibytes) Importance: high Dynamic update: read-only The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. transaction.max.timeout.ms Type: int Default: 900000 (15 minutes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum allowed timeout for transactions. If a client's requested transaction time exceed this, then the broker will return an error in InitProducerIdRequest. This prevents a client from too large of a timeout, which can stall consumers reading from topics included in the transaction. transaction.state.log.load.buffer.size Type: int Default: 5242880 Valid Values: [1,... ] Importance: high Dynamic update: read-only Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache (soft-limit, overridden if records are too large). transaction.state.log.min.isr Type: int Default: 2 Valid Values: [1,... ] Importance: high Dynamic update: read-only Overridden min.insync.replicas config for the transaction topic. transaction.state.log.num.partitions Type: int Default: 50 Valid Values: [1,... ] Importance: high Dynamic update: read-only The number of partitions for the transaction topic (should not change after deployment). transaction.state.log.replication.factor Type: short Default: 3 Valid Values: [1,... ] Importance: high Dynamic update: read-only The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement. transaction.state.log.segment.bytes Type: int Default: 104857600 (100 mebibytes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads. transactional.id.expiration.ms Type: int Default: 604800000 (7 days) Valid Values: [1,... ] Importance: high Dynamic update: read-only The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transaction before expiring its transactional id. This setting also influences producer id expiration - producer ids are expired once this time has elapsed after the last write with the given producer id. Note that producer ids may expire sooner if the last write from the producer id is deleted due to the topic's retention settings. unclean.leader.election.enable Type: boolean Default: false Importance: high Dynamic update: cluster-wide Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss. zookeeper.connect Type: string Default: null Importance: high Dynamic update: read-only Specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper server. 
To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3 . The server can also have a ZooKeeper chroot path as part of its ZooKeeper connection string which puts its data under some path in the global ZooKeeper namespace. For example to give a chroot path of /chroot/path you would give the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path . zookeeper.connection.timeout.ms Type: int Default: null Importance: high Dynamic update: read-only The max time that the client waits to establish a connection to zookeeper. If not set, the value in zookeeper.session.timeout.ms is used. zookeeper.max.in.flight.requests Type: int Default: 10 Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum number of unacknowledged requests the client will send to Zookeeper before blocking. zookeeper.session.timeout.ms Type: int Default: 18000 (18 seconds) Importance: high Dynamic update: read-only Zookeeper session timeout. zookeeper.set.acl Type: boolean Default: false Importance: high Dynamic update: read-only Set client to use secure ACLs. broker.id.generation.enable Type: boolean Default: true Importance: medium Dynamic update: read-only Enable automatic broker id generation on the server. When enabled the value configured for reserved.broker.max.id should be reviewed. broker.rack Type: string Default: null Importance: medium Dynamic update: read-only Rack of the broker. This will be used in rack aware replication assignment for fault tolerance. Examples: RACK1 , us-east-1d . connections.max.idle.ms Type: long Default: 600000 (10 minutes) Importance: medium Dynamic update: read-only Idle connections timeout: the server socket processor threads close the connections that idle more than this. connections.max.reauth.ms Type: long Default: 0 Importance: medium Dynamic update: read-only When explicitly set to a positive number (the default is 0, not a positive number), a session lifetime that will not exceed the configured value will be communicated to v2.2.0 or later clients when they authenticate. The broker will disconnect any such connection that is not re-authenticated within the session lifetime and that is then subsequently used for any purpose other than re-authentication. Configuration names can optionally be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.oauthbearer.connections.max.reauth.ms=3600000. controlled.shutdown.enable Type: boolean Default: true Importance: medium Dynamic update: read-only Enable controlled shutdown of the server. controlled.shutdown.max.retries Type: int Default: 3 Importance: medium Dynamic update: read-only Controlled shutdown can fail for multiple reasons. This determines the number of retries when such failure happens. controlled.shutdown.retry.backoff.ms Type: long Default: 5000 (5 seconds) Importance: medium Dynamic update: read-only Before each retry, the system needs time to recover from the state that caused the failure (Controller fail over, replica lag etc). This config determines the amount of time to wait before retrying. controller.socket.timeout.ms Type: int Default: 30000 (30 seconds) Importance: medium Dynamic update: read-only The socket timeout for controller-to-broker channels. 
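As a minimal sketch that ties together several of the static properties above (the broker id, multiple log directories, the chroot form of zookeeper.connect, and broker.rack), the following server.properties fragment uses illustrative host names, paths, and rack labels only:

broker.id=0
log.dirs=/var/lib/kafka/data-0,/var/lib/kafka/data-1
zookeeper.connect=zoo1.example.com:2181,zoo2.example.com:2181,zoo3.example.com:2181/my-kafka
zookeeper.session.timeout.ms=18000
broker.rack=us-east-1d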
default.replication.factor Type: int Default: 1 Importance: medium Dynamic update: read-only default replication factors for automatically created topics. delegation.token.expiry.time.ms Type: long Default: 86400000 (1 day) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The token validity time in miliseconds before the token needs to be renewed. Default value 1 day. delegation.token.master.key Type: password Default: null Importance: medium Dynamic update: read-only DEPRECATED: An alias for delegation.token.secret.key, which should be used instead of this config. delegation.token.max.lifetime.ms Type: long Default: 604800000 (7 days) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The token has a maximum lifetime beyond which it cannot be renewed anymore. Default value 7 days. delegation.token.secret.key Type: password Default: null Importance: medium Dynamic update: read-only Secret key to generate and verify delegation tokens. The same key must be configured across all the brokers. If the key is not set or set to empty string, brokers will disable the delegation token support. delete.records.purgatory.purge.interval.requests Type: int Default: 1 Importance: medium Dynamic update: read-only The purge interval (in number of requests) of the delete records request purgatory. fetch.max.bytes Type: int Default: 57671680 (55 mebibytes) Valid Values: [1024,... ] Importance: medium Dynamic update: read-only The maximum number of bytes we will return for a fetch request. Must be at least 1024. fetch.purgatory.purge.interval.requests Type: int Default: 1000 Importance: medium Dynamic update: read-only The purge interval (in number of requests) of the fetch request purgatory. group.initial.rebalance.delay.ms Type: int Default: 3000 (3 seconds) Importance: medium Dynamic update: read-only The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins. group.max.session.timeout.ms Type: int Default: 1800000 (30 minutes) Importance: medium Dynamic update: read-only The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures. group.max.size Type: int Default: 2147483647 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The maximum number of consumers that a single consumer group can accommodate. group.min.session.timeout.ms Type: int Default: 6000 (6 seconds) Importance: medium Dynamic update: read-only The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources. inter.broker.listener.name Type: string Default: null Importance: medium Dynamic update: read-only Name of listener used for communication between brokers. If this is unset, the listener name is defined by security.inter.broker.protocol. It is an error to set this and security.inter.broker.protocol properties at the same time. 
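The inter.broker.listener.name entry above works together with the listeners, advertised.listeners, and listener.security.protocol.map properties described earlier. A minimal sketch with hypothetical host names follows; note that inter.broker.listener.name and security.inter.broker.protocol must not be set at the same time.

listeners=INTERNAL://broker1.example.com:9092,EXTERNAL://broker1.example.com:9093
advertised.listeners=INTERNAL://broker1.example.com:9092,EXTERNAL://broker1.example.com:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
inter.broker.listener.name=INTERNAL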
inter.broker.protocol.version Type: string Default: 2.8-IV1 Valid Values: [0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1] Importance: medium Dynamic update: read-only Specify which version of the inter-broker protocol will be used. This is typically bumped after all brokers were upgraded to a new version. Example of some valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1 Check ApiVersion for the full list. log.cleaner.backoff.ms Type: long Default: 15000 (15 seconds) Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The amount of time to sleep when there are no logs to clean. log.cleaner.dedupe.buffer.size Type: long Default: 134217728 Importance: medium Dynamic update: cluster-wide The total memory used for log deduplication across all cleaner threads. log.cleaner.delete.retention.ms Type: long Default: 86400000 (1 day) Importance: medium Dynamic update: cluster-wide How long are delete records retained? log.cleaner.enable Type: boolean Default: true Importance: medium Dynamic update: read-only Enable the log cleaner process to run on the server. Should be enabled if using any topics with a cleanup.policy=compact including the internal offsets topic. If disabled those topics will not be compacted and continually grow in size. log.cleaner.io.buffer.load.factor Type: double Default: 0.9 Importance: medium Dynamic update: cluster-wide Log cleaner dedupe buffer load factor. The percentage full the dedupe buffer can become. A higher value will allow more log to be cleaned at once but will lead to more hash collisions. log.cleaner.io.buffer.size Type: int Default: 524288 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The total memory used for log cleaner I/O buffers across all cleaner threads. log.cleaner.io.max.bytes.per.second Type: double Default: 1.7976931348623157E308 Importance: medium Dynamic update: cluster-wide The log cleaner will be throttled so that the sum of its read and write i/o will be less than this value on average. log.cleaner.max.compaction.lag.ms Type: long Default: 9223372036854775807 Importance: medium Dynamic update: cluster-wide The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted. log.cleaner.min.cleanable.ratio Type: double Default: 0.5 Importance: medium Dynamic update: cluster-wide The minimum ratio of dirty log to total log for a log to eligible for cleaning. If the log.cleaner.max.compaction.lag.ms or the log.cleaner.min.compaction.lag.ms configurations are also specified, then the log compactor considers the log eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the log.cleaner.min.compaction.lag.ms duration, or (ii) if the log has had dirty (uncompacted) records for at most the log.cleaner.max.compaction.lag.ms period. log.cleaner.min.compaction.lag.ms Type: long Default: 0 Importance: medium Dynamic update: cluster-wide The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted. log.cleaner.threads Type: int Default: 1 Valid Values: [0,... 
] Importance: medium Dynamic update: cluster-wide The number of background threads to use for log cleaning. log.cleanup.policy Type: list Default: delete Valid Values: [compact, delete] Importance: medium Dynamic update: cluster-wide The default cleanup policy for segments beyond the retention window. A comma separated list of valid policies. Valid policies are: "delete" and "compact". log.index.interval.bytes Type: int Default: 4096 (4 kibibytes) Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The interval with which we add an entry to the offset index. log.index.size.max.bytes Type: int Default: 10485760 (10 mebibytes) Valid Values: [4,... ] Importance: medium Dynamic update: cluster-wide The maximum size in bytes of the offset index. log.message.format.version Type: string Default: 2.8-IV1 Valid Values: [0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1] Importance: medium Dynamic update: read-only Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0, check ApiVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller or equal than the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand. log.message.timestamp.difference.max.ms Type: long Default: 9223372036854775807 Importance: medium Dynamic update: cluster-wide The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If log.message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime.The maximum timestamp difference allowed should be no greater than log.retention.ms to avoid unnecessarily frequent log rolling. log.message.timestamp.type Type: string Default: CreateTime Valid Values: [CreateTime, LogAppendTime] Importance: medium Dynamic update: cluster-wide Define whether the timestamp in the message is message create time or log append time. The value should be either CreateTime or LogAppendTime . log.preallocate Type: boolean Default: false Importance: medium Dynamic update: cluster-wide Should pre allocate file when create new segment? If you are using Kafka on Windows, you probably need to set it to true. log.retention.check.interval.ms Type: long Default: 300000 (5 minutes) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion. max.connection.creation.rate Type: int Default: 2147483647 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The maximum connection creation rate we allow in the broker at any time. 
Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connection.creation.rate .Broker-wide connection rate limit should be configured based on broker capacity while listener limits should be configured based on application requirements. New connections will be throttled if either the listener or the broker limit is reached, with the exception of inter-broker listener. Connections on the inter-broker listener will be throttled only when the listener-level rate limit is reached. max.connections Type: int Default: 2147483647 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The maximum number of connections we allow in the broker at any time. This limit is applied in addition to any per-ip limits configured using max.connections.per.ip. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connections . Broker-wide limit should be configured based on broker capacity while listener limits should be configured based on application requirements. New connections are blocked if either the listener or broker limit is reached. Connections on the inter-broker listener are permitted even if broker-wide limit is reached. The least recently used connection on another listener will be closed in this case. max.connections.per.ip Type: int Default: 2147483647 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The maximum number of connections we allow from each ip address. This can be set to 0 if there are overrides configured using max.connections.per.ip.overrides property. New connections from the ip address are dropped if the limit is reached. max.connections.per.ip.overrides Type: string Default: "" Importance: medium Dynamic update: cluster-wide A comma-separated list of per-ip or hostname overrides to the default maximum number of connections. An example value is "hostName:100,127.0.0.1:200". max.incremental.fetch.session.cache.slots Type: int Default: 1000 Valid Values: [0,... ] Importance: medium Dynamic update: read-only The maximum number of incremental fetch sessions that we will maintain. num.partitions Type: int Default: 1 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The default number of log partitions per topic. password.encoder.old.secret Type: password Default: null Importance: medium Dynamic update: read-only The old secret that was used for encoding dynamically configured passwords. This is required only when the secret is updated. If specified, all dynamically encoded passwords are decoded using this old secret and re-encoded using password.encoder.secret when broker starts up. password.encoder.secret Type: password Default: null Importance: medium Dynamic update: read-only The secret used for encoding dynamically configured passwords for this broker. principal.builder.class Type: class Default: null Importance: medium Dynamic update: per-broker The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, which is used to build the KafkaPrincipal object used during authorization. This config also supports the deprecated PrincipalBuilder interface which was previously used for client authentication over SSL. If no principal builder is defined, the default behavior depends on the security protocol in use. 
For SSL authentication, the principal will be derived using the rules defined by ssl.principal.mapping.rules applied on the distinguished name from the client certificate if one is provided; otherwise, if client authentication is not required, the principal name will be ANONYMOUS. For SASL authentication, the principal will be derived using the rules defined by sasl.kerberos.principal.to.local.rules if GSSAPI is in use, and the SASL authentication ID for other mechanisms. For PLAINTEXT, the principal will be ANONYMOUS. producer.purgatory.purge.interval.requests Type: int Default: 1000 Importance: medium Dynamic update: read-only The purge interval (in number of requests) of the producer request purgatory. queued.max.request.bytes Type: long Default: -1 Importance: medium Dynamic update: read-only The number of queued bytes allowed before no more requests are read. replica.fetch.backoff.ms Type: int Default: 1000 (1 second) Valid Values: [0,... ] Importance: medium Dynamic update: read-only The amount of time to sleep when fetch partition error occurs. replica.fetch.max.bytes Type: int Default: 1048576 (1 mebibyte) Valid Values: [0,... ] Importance: medium Dynamic update: read-only The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). replica.fetch.response.max.bytes Type: int Default: 10485760 (10 mebibytes) Valid Values: [0,... ] Importance: medium Dynamic update: read-only Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). replica.selector.class Type: string Default: null Importance: medium Dynamic update: read-only The fully qualified class name that implements ReplicaSelector. This is used by the broker to find the preferred read replica. By default, we use an implementation that returns the leader. reserved.broker.max.id Type: int Default: 1000 Valid Values: [0,... ] Importance: medium Dynamic update: read-only Max number that can be used for a broker.id. sasl.client.callback.handler.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.enabled.mechanisms Type: list Default: GSSAPI Importance: medium Dynamic update: per-broker The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default. sasl.jaas.config Type: password Default: null Importance: medium Dynamic update: per-broker JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*; . 
For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: medium Dynamic update: per-broker Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: medium Dynamic update: per-broker Login thread sleep time between refresh attempts. sasl.kerberos.principal.to.local.rules Type: list Default: DEFAULT Importance: medium Dynamic update: per-broker A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format please see security authorization and acls . Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration. sasl.kerberos.service.name Type: string Default: null Importance: medium Dynamic update: per-broker The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: medium Dynamic update: per-broker Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: medium Dynamic update: per-broker Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.callback.handler.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. sasl.login.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.login.refresh.buffer.seconds Type: short Default: 300 Importance: medium Dynamic update: per-broker The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. 
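The sasl.jaas.config entry above notes that broker-side JAAS settings must be prefixed with the listener name and the SASL mechanism in lower case. The following sketch shows that prefixing for a SCRAM-SHA-512 listener; the host name and credentials are placeholders.

listeners=SASL_SSL://broker1.example.com:9094
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.enabled.mechanisms=SCRAM-SHA-512
listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="broker" password="<broker-password>";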
sasl.login.refresh.min.period.seconds Type: short Default: 60 Importance: medium Dynamic update: per-broker The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Importance: medium Dynamic update: per-broker Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Importance: medium Dynamic update: per-broker The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.mechanism.inter.broker.protocol Type: string Default: GSSAPI Importance: medium Dynamic update: per-broker SASL mechanism used for inter-broker communication. Default is GSSAPI. sasl.server.callback.handler.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a SASL server callback handler class that implements the AuthenticateCallbackHandler interface. Server callback handlers must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=com.example.CustomPlainCallbackHandler. security.inter.broker.protocol Type: string Default: PLAINTEXT Importance: medium Dynamic update: read-only Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error to set this and inter.broker.listener.name properties at the same time. socket.connection.setup.timeout.max.ms Type: long Default: 30000 (30 seconds) Importance: medium Dynamic update: read-only The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value. socket.connection.setup.timeout.ms Type: long Default: 10000 (10 seconds) Importance: medium Dynamic update: read-only The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. ssl.cipher.suites Type: list Default: "" Importance: medium Dynamic update: per-broker A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. 
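Many of the properties in this appendix are marked Dynamic update: per-broker or cluster-wide. Those can be changed at runtime with the kafka-configs.sh tool rather than by editing server.properties and restarting. A minimal sketch, using an arbitrary broker id and example values, follows.

# cluster-wide default, applied to all brokers
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default \
  --alter --add-config log.cleaner.threads=2

# per-broker override for broker id 0
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 \
  --alter --add-config max.connections=1000

# review the dynamic configuration currently in effect
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --describe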
ssl.client.auth Type: string Default: none Valid Values: [required, requested, none] Importance: medium Dynamic update: per-broker Configures the Kafka broker to request client authentication. The following settings are common: ssl.client.auth=required If set to required, client authentication is required. ssl.client.auth=requested This means client authentication is optional. Unlike required, if this option is set the client can choose not to provide authentication information about itself. ssl.client.auth=none This means client authentication is not needed. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium Dynamic update: per-broker The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.key.password Type: password Default: null Importance: medium Dynamic update: per-broker The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. This is required for clients only if two-way authentication is configured. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: medium Dynamic update: per-broker The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.keystore.certificate.chain Type: password Default: null Importance: medium Dynamic update: per-broker Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates. ssl.keystore.key Type: password Default: null Importance: medium Dynamic update: per-broker Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'. ssl.keystore.location Type: string Default: null Importance: medium Dynamic update: per-broker The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: medium Dynamic update: per-broker The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format. ssl.keystore.type Type: string Default: JKS Importance: medium Dynamic update: per-broker The file format of the key store file. This is optional for client. ssl.protocol Type: string Default: TLSv1.3 Importance: medium Dynamic update: per-broker The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. 
If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium Dynamic update: per-broker The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: medium Dynamic update: per-broker The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. ssl.truststore.certificates Type: password Default: null Importance: medium Dynamic update: per-broker Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates. ssl.truststore.location Type: string Default: null Importance: medium Dynamic update: per-broker The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: medium Dynamic update: per-broker The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format. ssl.truststore.type Type: string Default: JKS Importance: medium Dynamic update: per-broker The file format of the trust store file. zookeeper.clientCnxnSocket Type: string Default: null Importance: medium Dynamic update: read-only Typically set to org.apache.zookeeper.ClientCnxnSocketNetty when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the same-named zookeeper.clientCnxnSocket system property. zookeeper.ssl.client.enable Type: boolean Default: false Importance: medium Dynamic update: read-only Set client to use TLS when connecting to ZooKeeper. An explicit value overrides any value set via the zookeeper.client.secure system property (note the different name). Defaults to false if neither is set; when true, zookeeper.clientCnxnSocket must be set (typically to org.apache.zookeeper.ClientCnxnSocketNetty ); other values to set may include zookeeper.ssl.cipher.suites , zookeeper.ssl.crl.enable , zookeeper.ssl.enabled.protocols , zookeeper.ssl.endpoint.identification.algorithm , zookeeper.ssl.keystore.location , zookeeper.ssl.keystore.password , zookeeper.ssl.keystore.type , zookeeper.ssl.ocsp.enable , zookeeper.ssl.protocol , zookeeper.ssl.truststore.location , zookeeper.ssl.truststore.password , zookeeper.ssl.truststore.type . zookeeper.ssl.keystore.location Type: string Default: null Importance: medium Dynamic update: read-only Keystore location when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.location system property (note the camelCase). zookeeper.ssl.keystore.password Type: password Default: null Importance: medium Dynamic update: read-only Keystore password when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.password system property (note the camelCase). Note that ZooKeeper does not support a key password different from the keystore password, so be sure to set the key password in the keystore to be identical to the keystore password; otherwise the connection attempt to Zookeeper will fail. 
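As a minimal sketch of how the keystore, truststore, and ZooKeeper TLS client properties above are typically combined in server.properties (all paths and passwords below are placeholders, and the EXTERNAL listener name is an assumption carried over from the earlier sketch): # Per-listener TLS stores; unset listener-prefixed configs fall back to the plain ssl.* properties listener.name.external.ssl.keystore.location=/var/private/ssl/broker.keystore.jks listener.name.external.ssl.keystore.password=keystore-secret listener.name.external.ssl.key.password=key-secret listener.name.external.ssl.truststore.location=/var/private/ssl/broker.truststore.jks listener.name.external.ssl.truststore.password=truststore-secret ssl.client.auth=required # TLS from the broker to ZooKeeper requires the Netty client socket zookeeper.ssl.client.enable=true zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty zookeeper.ssl.truststore.location=/var/private/ssl/zookeeper.truststore.jks zookeeper.ssl.truststore.password=zk-truststore-secret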
zookeeper.ssl.keystore.type Type: string Default: null Importance: medium Dynamic update: read-only Keystore type when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the keystore. zookeeper.ssl.truststore.location Type: string Default: null Importance: medium Dynamic update: read-only Truststore location when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.location system property (note the camelCase). zookeeper.ssl.truststore.password Type: password Default: null Importance: medium Dynamic update: read-only Truststore password when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.password system property (note the camelCase). zookeeper.ssl.truststore.type Type: string Default: null Importance: medium Dynamic update: read-only Truststore type when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the truststore. alter.config.policy.class.name Type: class Default: null Importance: low Dynamic update: read-only The alter configs policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.AlterConfigPolicy interface. alter.log.dirs.replication.quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples to retain in memory for alter log dirs replication quotas. alter.log.dirs.replication.quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: low Dynamic update: read-only The time span of each sample for alter log dirs replication quotas. authorizer.class.name Type: string Default: "" Importance: low Dynamic update: read-only The fully qualified name of a class that implements the org.apache.kafka.server.authorizer.Authorizer interface, which is used by the broker for authorization. This config also supports authorizers that implement the deprecated kafka.security.auth.Authorizer trait which was previously used for authorization. client.quota.callback.class Type: class Default: null Importance: low Dynamic update: read-only The fully qualified name of a class that implements the ClientQuotaCallback interface, which is used to determine quota limits applied to client requests. By default, <user, client-id>, <user> or <client-id> quotas stored in ZooKeeper are applied. For any given request, the most specific quota that matches the user principal of the session and the client-id of the request is applied. connection.failed.authentication.delay.ms Type: int Default: 100 Valid Values: [0,... ] Importance: low Dynamic update: read-only Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on authentication failure. This must be configured to be less than connections.max.idle.ms to prevent connection timeout. controller.quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples to retain in memory for controller mutation quotas. controller.quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... 
] Importance: low Dynamic update: read-only The time span of each sample for controller mutation quotas. create.topic.policy.class.name Type: class Default: null Importance: low Dynamic update: read-only The create topic policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.CreateTopicPolicy interface. delegation.token.expiry.check.interval.ms Type: long Default: 3600000 (1 hour) Valid Values: [1,... ] Importance: low Dynamic update: read-only Scan interval to remove expired delegation tokens. kafka.metrics.polling.interval.secs Type: int Default: 10 Valid Values: [1,... ] Importance: low Dynamic update: read-only The metrics polling interval (in seconds) which can be used in kafka.metrics.reporters implementations. kafka.metrics.reporters Type: list Default: "" Importance: low Dynamic update: read-only A list of classes to use as Yammer metrics custom reporters. The reporters should implement kafka.metrics.KafkaMetricsReporter trait. If a client wants to expose JMX operations on a custom reporter, the custom reporter needs to additionally implement an MBean trait that extends kafka.metrics.KafkaMetricsReporterMBean trait so that the registered MBean is compliant with the standard MBean convention. listener.security.protocol.map Type: string Default: PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL Importance: low Dynamic update: per-broker Map between listener names and security protocols. This must be defined for the same security protocol to be usable in more than one port or IP. For example, internal and external traffic can be separated even if SSL is required for both. Concretely, the user could define listeners with names INTERNAL and EXTERNAL and this property as: INTERNAL:SSL,EXTERNAL:SSL . As shown, key and value are separated by a colon and map entries are separated by commas. Each listener name should only appear once in the map. Different security (SSL and SASL) settings can be configured for each listener by adding a normalised prefix (the listener name is lowercased) to the config name. For example, to set a different keystore for the INTERNAL listener, a config with name listener.name.internal.ssl.keystore.location would be set. If the config for the listener name is not set, the config will fall back to the generic config (i.e. ssl.keystore.location ). log.message.downconversion.enable Type: boolean Default: true Importance: low Dynamic update: cluster-wide This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false , the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with UNSUPPORTED_VERSION error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers. metric.reporters Type: list Default: "" Importance: low Dynamic update: cluster-wide A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples maintained to compute metrics. 
metrics.recording.level Type: string Default: INFO Importance: low Dynamic update: read-only The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [1,... ] Importance: low Dynamic update: read-only The window of time a metrics sample is computed over. password.encoder.cipher.algorithm Type: string Default: AES/CBC/PKCS5Padding Importance: low Dynamic update: read-only The Cipher algorithm used for encoding dynamically configured passwords. password.encoder.iterations Type: int Default: 4096 Valid Values: [1024,... ] Importance: low Dynamic update: read-only The iteration count used for encoding dynamically configured passwords. password.encoder.key.length Type: int Default: 128 Valid Values: [8,... ] Importance: low Dynamic update: read-only The key length used for encoding dynamically configured passwords. password.encoder.keyfactory.algorithm Type: string Default: null Importance: low Dynamic update: read-only The SecretKeyFactory algorithm used for encoding dynamically configured passwords. Default is PBKDF2WithHmacSHA512 if available and PBKDF2WithHmacSHA1 otherwise. quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples to retain in memory for client quotas. quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: low Dynamic update: read-only The time span of each sample for client quotas. replication.quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples to retain in memory for replication quotas. replication.quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: low Dynamic update: read-only The time span of each sample for replication quotas. security.providers Type: string Default: null Importance: low Dynamic update: read-only A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low Dynamic update: per-broker The endpoint identification algorithm to validate server hostname using server certificate. ssl.engine.factory.class Type: class Default: null Importance: low Dynamic update: per-broker The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. ssl.principal.mapping.rules Type: string Default: DEFAULT Importance: low Dynamic update: read-only A list of rules for mapping from distinguished name from the client certificate to short name. The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, distinguished name of the X.500 certificate will be the principal. For more details on the format please see security authorization and acls . Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration. ssl.secure.random.implementation Type: string Default: null Importance: low Dynamic update: per-broker The SecureRandom PRNG implementation to use for SSL cryptography operations. transaction.abort.timed.out.transaction.cleanup.interval.ms Type: int Default: 10000 (10 seconds) Valid Values: [1,... 
] Importance: low Dynamic update: read-only The interval at which to rollback transactions that have timed out. transaction.remove.expired.transaction.cleanup.interval.ms Type: int Default: 3600000 (1 hour) Valid Values: [1,... ] Importance: low Dynamic update: read-only The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing. zookeeper.ssl.cipher.suites Type: list Default: null Importance: low Dynamic update: read-only Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.ciphersuites system property (note the single word "ciphersuites"). The default value of null means the list of enabled cipher suites is determined by the Java runtime being used. zookeeper.ssl.crl.enable Type: boolean Default: false Importance: low Dynamic update: read-only Specifies whether to enable Certificate Revocation List in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.crl system property (note the shorter name). zookeeper.ssl.enabled.protocols Type: list Default: null Importance: low Dynamic update: read-only Specifies the enabled protocol(s) in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.enabledProtocols system property (note the camelCase). The default value of null means the enabled protocol will be the value of the zookeeper.ssl.protocol configuration property. zookeeper.ssl.endpoint.identification.algorithm Type: string Default: HTTPS Importance: low Dynamic update: read-only Specifies whether to enable hostname verification in the ZooKeeper TLS negotiation process, with (case-insensitively) "https" meaning ZooKeeper hostname verification is enabled and an explicit blank value meaning it is disabled (disabling it is only recommended for testing purposes). An explicit value overrides any "true" or "false" value set via the zookeeper.ssl.hostnameVerification system property (note the different name and values; true implies https and false implies blank). zookeeper.ssl.ocsp.enable Type: boolean Default: false Importance: low Dynamic update: read-only Specifies whether to enable Online Certificate Status Protocol in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.ocsp system property (note the shorter name). zookeeper.ssl.protocol Type: string Default: TLSv1.2 Importance: low Dynamic update: read-only Specifies the protocol to be used in ZooKeeper TLS negotiation. An explicit value overrides any value set via the same-named zookeeper.ssl.protocol system property. zookeeper.sync.time.ms Type: int Default: 2000 (2 seconds) Importance: low Dynamic update: read-only How far a ZK follower can be behind a ZK leader. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_rhel/broker-configuration-parameters-str |
Chapter 5. Developer previews | Chapter 5. Developer previews This section describes developer preview features introduced in Red Hat OpenShift Data Foundation 4.9. Important Developer preview feature is subject to Developer preview support limitations. Developer preview releases are not intended to be run in production environments. The clusters deployed with the developer preview features are considered to be development clusters and are not supported through the Red Hat Customer Portal case management system. If you need assistance with developer preview features, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. Regional-DR with Advanced Cluster Management Regional-DR solution provides an automated "one-click" recovery in the event of a regional disaster. The protected applications are automatically redeployed to a designated OpenShift Container Platform with OpenShift Data Foundation cluster that is available in another region. For more information, see Configuring Regional-DR with Advanced Cluster Management . Quota support for object data You can now set quota options for object bucket claims (OBC) to avoid resource starvation and increase the usage of the product. You set the quota during the OBC creation using the options maxObjects and maxSize in the custom resource definitions (CRD). You can also update these options after the OBC creation. For more information, see https://access.redhat.com/articles/6541861 . IPv6 support With this release, IPv6 single-stack and dual-stack can be used with OpenShift Data Foundation. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/4.9_release_notes/developer_previews |
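The following is a minimal sketch of an object bucket claim that sets the quota options mentioned above; the resource name, namespace, storage class, and the placement of maxObjects and maxSize under spec.additionalConfig are assumptions for illustration rather than values taken from this release note. apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: obc-with-quota # hypothetical claim name namespace: openshift-storage # assumed namespace spec: generateBucketName: quota-bucket storageClassName: openshift-storage.noobaa.io additionalConfig: maxObjects: "10000" # cap on the number of objects in the bucket maxSize: "2G" # cap on the total size of data in the bucket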
Chapter 3. Set Up Eviction | Chapter 3. Set Up Eviction 3.1. About Eviction Eviction is the process of removing entries from memory to prevent running out of memory. Entries that are evicted from memory remain in configured cache stores and the rest of the cluster to prevent permanent data loss. If no cache store is configured, and eviction is enabled, data loss is possible. Red Hat JBoss Data Grid executes eviction tasks by utilizing user threads which are already interacting with the data container. JBoss Data Grid uses a separate thread to prune expired cache entries from the cache. Eviction occurs individually on a per node basis, rather than occurring as a cluster-wide operation. Each node uses an eviction thread to analyze the contents of its in-memory container to determine which entries require eviction. The free memory in the Java Virtual Machine (JVM) is not a consideration during the eviction analysis, even as a threshold to initialize entry eviction. In JBoss Data Grid, eviction provides a mechanism to efficiently remove entries from the in-memory representation of a cache, and removed entries will be pushed to a cache store, if configured. This ensures that the memory can always accommodate new entries as they are fetched and that evicted entries are preserved in the cluster instead of lost. Additionally, eviction strategies can be used as required for your configuration to set up which entries are evicted and when eviction occurs. See Also: Section 4.3, "Eviction and Expiration Comparison" Report a bug | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-Set_Up_Eviction |
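As a small library-mode sketch of enabling eviction, assuming the JBoss Data Grid 6 XML configuration schema (the cache name and entry limit are illustrative, and a cache store should also be configured so evicted entries are preserved rather than lost): <namedCache name="boundedCache"> <!-- Evict least-recently-used entries once 10000 entries are held in memory --> <eviction strategy="LRU" maxEntries="10000"/> </namedCache>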
Back Up and Restore the Director Undercloud | Back Up and Restore the Director Undercloud Red Hat OpenStack Platform 16.0 Back up and restore the director undercloud OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/back_up_and_restore_the_director_undercloud/index |
probe::ioscheduler_trace.unplug_io | probe::ioscheduler_trace.unplug_io Name probe::ioscheduler_trace.unplug_io - Fires when a request queue is unplugged; Synopsis ioscheduler_trace.unplug_io Values name Name of the probe point rq_queue request queue Description Either, when number of pending requests in the queue exceeds threshold or, upon expiration of timer that was activated when queue was plugged. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ioscheduler-trace-unplug-io |
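A minimal usage sketch, assuming the standard tapset is installed, which prints the probe's two values each time a queue is unplugged (the script name is arbitrary): # unplug.stp - report each request queue unplug event probe ioscheduler_trace.unplug_io { printf("%s: unplugged request queue %p\n", name, rq_queue) } Run it with stap unplug.stp and generate block I/O to observe events.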
Chapter 1. Introduction to Camel K | Chapter 1. Introduction to Camel K This chapter introduces the concepts, features, and cloud-native architecture provided by Red Hat Integration - Camel K: Section 1.1, "Camel K overview" Section 1.2, "Camel K features" Section 1.2.3, "Kamelets" Section 1.3, "Camel K development tooling" Section 1.4, "Camel K distributions" 1.1. Camel K overview Red Hat Integration - Camel K is a lightweight integration framework built from Apache Camel K that runs natively in the cloud on OpenShift. Camel K is specifically designed for serverless and microservice architectures. You can use Camel K to instantly run your integration code written in Camel Domain Specific Language (DSL) directly on OpenShift. Camel K is a subproject of the Apache Camel open source community: https://github.com/apache/camel-k . Camel K is implemented in the Go programming language and uses the Kubernetes Operator SDK to automatically deploy integrations in the cloud. For example, this includes automatically creating services and routes on OpenShift. This provides much faster turnaround times when deploying and redeploying integrations in the cloud, such as a few seconds or less instead of minutes. The Camel K runtime provides significant performance optimizations. The Quarkus cloud-native Java framework is enabled by default to provide faster start up times, and lower memory and CPU footprints. When running Camel K in developer mode, you can make live updates to your integration DSL and view results instantly in the cloud on OpenShift, without waiting for your integration to redeploy. Using Camel K with OpenShift Serverless and Knative Serving, containers are created only as needed and are autoscaled under load up and down to zero. This reduces cost by removing the overhead of server provisioning and maintenance and enables you to focus on application development instead. Using Camel K with OpenShift Serverless and Knative Eventing, you can manage how components in your system communicate in an event-driven architecture for serverless applications. This provides flexibility and creates efficiencies through decoupled relationships between event producers and consumers using a publish-subscribe or event-streaming model. Additional resources Apache Camel K website Getting started with OpenShift Serverless 1.2. Camel K features The Camel K includes the following main platforms and features: 1.2.1. Platform and component versions OpenShift Container Platform 4.13, 4.14 OpenShift Serverless 1.31.1 Red Hat Build of Quarkus 2.13.8.Final-redhat-00006 Red Hat Camel Extensions for Quarkus 2.13.3.redhat-00008 Apache Camel K 1.10.5.redhat-00002 Apache Camel 3.18.6.redhat-00007 OpenJDK 11 1.2.2. Camel K features Knative Serving for autoscaling and scale-to-zero Knative Eventing for event-driven architectures Performance optimizations using Quarkus runtime by default Camel integrations written in Java or YAML DSL Development tooling with Visual Studio Code Monitoring of integrations using Prometheus in OpenShift Quickstart tutorials Kamelet Catalog of connectors to external systems such as AWS, Jira, and Salesforce The following diagram shows a simplified view of the Camel K cloud-native architecture: Additional resources Apache Camel architecture 1.2.3. Kamelets Kamelets hide the complexity of connecting to external systems behind a simple interface, which contains all the information needed to instantiate them, even for users who are not familiar with Camel. 
Kamelets are implemented as custom resources that you can install on an OpenShift cluster and use in Camel K integrations. Kamelets are route templates that use Camel components designed to connect to external systems without requiring deep understanding of the component. Kamelets abstract the details of connecting to external systems. You can also combine Kamelets to create complex Camel integrations, just like using standard Camel components. Additional resources Integrating Applications with Kamelets 1.3. Camel K development tooling The Camel K provides development tooling extensions for Visual Studio (VS) Code, Red Hat CodeReady WorkSpaces, and Eclipse Che. The Camel-based tooling extensions include features such as automatic completion of Camel DSL code, Camel K modeline configuration, and Camel K traits. The following VS Code development tooling extensions are available: VS Code Extension Pack for Apache Camel by Red Hat Tooling for Apache Camel K extension Language Support for Apache Camel extension Debug Adapter for Apache Camel K Additional extensions for OpenShift, Java and more For details on how to set up these VS Code extensions for Camel K, see Setting up your Camel K development environment . Important The following plugin VS Code Language support for Camel - a part of the Camel extension pack provides support for content assist when editing Camel routes and application.properties . To install a supported Camel K tooling extension for VS code to create, run and operate Camel K integrations on OpenShift, see VS Code Tooling for Apache Camel K by Red Hat extension To install a supported Camel debug tool extension for VS code to debug Camel integrations written in Java, YAML or XML locally, see Debug Adapter for Apache Camel by Red Hat For details about configurations and components to use the developer tool with specific product versions, see Camel K Supported Configurations and Camel K Component Details Note: The Camel K VS Code extensions are community features. Eclipse Che also provides these features using the vscode-camelk plug-in. For more information about scope of development support, see Development Support Scope of Coverage Additional resources VS Code tooling for Apache Camel K example Eclipse Che tooling for Apache Camel K 1.4. Camel K distributions Table 1.1. Red Hat Integration - Camel K distributions Distribution Description Location Operator image Container image for the Red Hat Integration - Camel K Operator: integration/camel-k-rhel8-operator OpenShift web console under Operators OperatorHub registry.redhat.io Maven repository Maven artifacts for Red Hat Integration - Camel K Red Hat provides Maven repositories that host the content we ship with our products. These repositories are available to download from the software downloads page. For Red Hat Integration - Camel K the following repositories are required: rhi-common rhi-camel-quarkus rhi-camel-k Installation of Red Hat Integration - Camel K in a disconnected environment (offline mode) is not supported. 
Software downloads of Red Hat build of Apache Camel Source code Source code for Red Hat Integration - Camel K Software downloads of Red Hat build of Apache Camel Quickstarts Quick start tutorials: Basic Java integration Event streaming integration JDBC integration JMS integration Kafka integration Knative integration SaaS integration Serverless API integration Transformations integration https://github.com/openshift-integration Note You must have a subscription for Red Hat build of Apache Camel K and be logged into the Red Hat Customer Portal to access the Red Hat Integration - Camel K distributions. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/getting_started_with_camel_k/introduction-to-camel-k |
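To make the YAML DSL mentioned in this chapter concrete, the following is a minimal integration sketch; the file name, timer period, and log endpoint are illustrative, and on a configured cluster it can be run with the kamel CLI, for example kamel run hello.yaml. # hello.yaml - a minimal Camel K integration in the YAML DSL - from: uri: "timer:tick?period=3000" steps: - setBody: constant: "Hello from Camel K" - to: "log:info"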
Appendix B. Understanding the example configuration files | Appendix B. Understanding the example configuration files B.1. Understanding the luks_tang_inventory.yml file B.1.1. Configuration parameters for disk encryption hc_nodes (required) A list of hyperconverged hosts that uses the back-end FQDN of the host, and the configuration details of those hosts. Configuration that is specific to a host is defined under that host's back-end FQDN. Configuration that is common to all hosts is defined in the vars: section. blacklist_mpath_devices (optional) By default, Red Hat Virtualization Host enables multipath configuration, which provides unique multipath names and worldwide identifiers for all disks, even when disks do not have underlying multipath configuration. Include this section if you do not have multipath configuration so that the multipath device names are not used for listed devices. Disks that are not listed here are assumed to have multipath configuration available, and require the path format /dev/mapper/<WWID> instead of /dev/sdx when defined in subsequent sections of the inventory file. On a server with four devices (sda, sdb, sdc and sdd), the following configuration blacklists only two devices. The path format /dev/mapper/<WWID> is expected for devices not in this list. gluster_infra_luks_devices (required) A list of devices to encrypt and the encryption passphrase to use for each device. devicename The name of the device in the format /dev/sdx . passphrase The password to use for this device when configuring encryption. After disk encryption with Network-Bound Disk Encryption (NBDE) is configured, a new random key is generated, providing greater security. rootpassphrase (required) The password that you used when you selected Encrypt my data during operating system installation on this host. rootdevice (required) The root device that was encrypted when you selected Encrypt my data during operating system installation on this host. networkinterface (required) The network interface this host uses to reach the NBDE key server. ip_version (required) Whether to use IPv4 or IPv6 networking. Valid values are IPv4 and IPv6 . There is no default value. Mixed networks are not supported. ip_config_method (required) Whether to use DHCP or static networking. Valid values are dhcp and static . There is no default value. The other valid value for this option is static , which requires the following additional parameters and is defined individually for each host: gluster_infra_tangservers The address of your NBDE key server or servers, including http:// . If your servers use a port other than the default (80), specify a port by appending :_port_ to the end of the URL. B.1.2. Example luks_tang_inventory.yml Dynamically allocated IP addresses Static IP addresses B.2. Understanding the gluster_inventory.yml file The gluster_inventory.yml file is an example Ansible inventory file that you can use to automate the deployment of Red Hat Hyperconverged Infrastructure for Virtualization using Ansible. The single_node_gluster_inventory.yml is the same as the gluster_inventory.yml file. The only change is in the hosts section as there is only 1 host for a single node deployment. You can find this file at /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/gluster_inventory.yml on any hyperconverged host. B.2.1. Default host groups The gluster_inventory.yml example file defines two host groups and their configuration in the YAML format. 
You can use these host groups directly if you want all nodes to host all storage domains. hc_nodes A list of hyperconverged hosts that uses the back-end FQDN of the host, and the configuration details of those hosts. Configuration that is specific to a host is defined under that host's back-end FQDN. Configuration that is common to all hosts is defined in the vars: section. gluster A list of hosts that uses the front-end FQDN of the host. These hosts serve as additional storage domain access points, so this list of nodes does not include the first host. If you want all nodes to host all storage domains, place storage_domains: and all storage domain definitions under the vars: section. B.2.2. Configuration parameters for hyperconverged nodes B.2.2.1. Multipath devices blacklist_mpath_devices (optional) By default, Red Hat Virtualization Host enables multipath configuration, which provides unique multipath names and worldwide identifiers for all disks, even when disks do not have underlying multipath configuration. Include this section if you do not have multipath configuration so that the multipath device names are not used for listed devices. Disks that are not listed here are assumed to have multipath configuration available, and require the path format /dev/mapper/<WWID> instead of /dev/sdx when defined in subsequent sections of the inventory file. On a server with four devices ( sda , sdb , sdc and sdd ), the following configuration blacklists only two devices. The path format /dev/mapper/<WWID> is expected for devices not in this list. Important Do not list encrypted devices ( luks_* devices) in blacklist_mpath_devices , as they require multipath configuration to work. B.2.2.2. Deduplication and compression gluster_infra_vdo (optional) Include this section to define a list of devices to use deduplication and compression. These devices require the /dev/mapper/<name> path format when you define them as volume groups in gluster_infra_volume_groups . Each device listed must have the following information: name A short name for the VDO device, for example vdo_sdc . device The device to use, for example, /dev/sdc . logicalsize The logical size of the VDO volume. Set this to ten times the size of the physical disk, for example, if you have a 500 GB disk, set logicalsize: '5000G' . emulate512 If you use devices with a 4 KB block size, set this to on . slabsize If the logical size of the volume is 1000 GB or larger, set this to 32G . If the logical size is smaller than 1000 GB, set this to 2G . blockmapcachesize Set this to 128M . writepolicy Set this to auto . For example: B.2.2.3. Cluster definition cluster_nodes (required) Defines the list of nodes that are part of the cluster, using the back-end FQDN for each node and creates the cluster. gluster_features_hci_cluster (required) Identifies cluster_nodes as part of a hyperconverged cluster. gluster_features_hci_volumes (required) Defines the layout of the Gluster volumes across the hyperconverged nodes. volname The name of the Gluster volume to create. brick The location at which to create the brick. arbiter Set to 1 for arbitrated volumes and 0 for a fully replicated volume. servers The list of back-end FQDN addresses for the hosts on which to create bricks for this volume. There are two format options for this parameter. Only one of these formats is supported per deployment. Format 1: Creates bricks for the specified volumes across all hosts Format 2: Creates bricks for the specified volumes on specified hosts B.2.2.4. 
Storage infrastructure gluster_infra_volume_groups (required) This section creates the volume groups that contain the logical volumes. gluster_infra_mount_devices (required) This section creates the logical volumes that form Gluster bricks. gluster_infra_thinpools (optional) This section defines logical thin pools for use by thinly provisioned volumes. Thin pools are not suitable for the engine volume, but can be used for the vmstore and data volume bricks. vgname The name of the volume group that contains this thin pool. thinpoolname A name for the thin pool, for example, gluster_thinpool_sdc . thinpoolsize The sum of the sizes of all logical volumes to be created in this volume group. poolmetadatasize Set to 16G ; this is the recommended size for supported deployments. gluster_infra_cache_vars (optional) This section defines cache logical volumes to improve performance for slow devices. A fast cache device is attached to a thin pool, and requires gluster_infra_thinpool to be defined. vgname The name of a volume group with a slow device that requires a fast external cache. cachedisk The paths of the slow and fast devices, separated with a comma, for example, to use a cache device sde with the slow device sdb , specify /dev/sdb,/dev/sde . cachelvname A name for this cache logical volume. cachethinpoolname The thin pool to which the fast cache volume is attached. cachelvsize The size of the cache logical volume. Around 0.01% of this size is used for cache metadata. cachemode The cache mode. Valid values are writethrough and writeback . gluster_infra_thick_lvs (required) The thickly provisioned logical volumes that are used to create bricks. Bricks for the engine volume must be thickly provisioned. vgname The name of the volume group that contains the logical volume. lvname The name of the logical volume. size The size of the logical volume. The engine logical volume requires 100G . gluster_infra_lv_logicalvols (required) The thinly provisioned logical volumes that are used to create bricks. vgname The name of the volume group that contains the logical volume. thinpool The thin pool that contains the logical volume, if this volume is thinly provisioned. lvname The name of the logical volume. size The size of the logical volume. The engine logical volume requires 100G . gluster_infra_disktype (required) Specifies the underlying hardware configuration of the disks. Set this to the value that matches your hardware: RAID6 , RAID5 , or JBOD . gluster_infra_diskcount (required) Specifies the number of data disks in the RAID set. For a JBOD disk type, set this to 1 . gluster_infra_stripe_unit_size (required) The stripe size of the RAID set in megabytes. gluster_features_force_varlogsizecheck (required) Set this to true if you want to verify that your /var/log partition has sufficient free space during the deployment process. It is important to have sufficient space for logs, but it is not required to verify space requirements at deployment time if you plan to monitor space requirements carefully. gluster_set_selinux_labels (required) Ensures that volumes can be accessed when SELinux is enabled. Set this to true if SELinux is enabled on this host. Recommendation for LV size Logical volume for engine brick must be a thick LV of size 100GB, other bricks created as thin LV reserving 16GB for thinpool metadata and 16GB reserved for spare metadata. 
Example: Other bricks for volumes can be created with the available thinpool storage space of 868GB, for example, vmstore brick with 200GB and data brick with 668GB. B.2.2.5. Firewall and network infrastructure gluster_infra_fw_ports (required) A list of ports to open between all nodes, in the format <port>/<protocol> . gluster_infra_fw_permanent (required) Ensures the ports listed in gluster_infra_fw_ports are open after nodes are rebooted. Set this to true for production use cases. gluster_infra_fw_state (required) Enables the firewall. Set this to enabled for production use cases. gluster_infra_fw_zone (required) Specifies the firewall zone to which these gluster_infra_fw_\* parameters are applied. gluster_infra_fw_services (required) A list of services to allow through the firewall. Ensure glusterfs is defined here. B.2.2.6. Storage domains storage_domains (required) Creates the specified storage domains. name The name of the storage domain to create. host The front-end FQDN of the first host. Do not use the IP address. address The back-end FQDN address of the first host. Do not use the IP address. path The path of the Gluster volume that provides the storage domain. function Set this to data ; this is the only supported type of storage domain. mount_options Specifies additional mount options. The backup-volfile-servers option is required to specify the other hosts that provide the volume. The xlator-option='transport.address-family=inet6' option is required for IPv6 configurations. IPv4 configuration IPv6 configuration B.2.3. Example gluster_inventory.yml file B.3. Understanding the he_gluster_vars.json file The he_gluster_vars.json file is an example Ansible variable file. The variables in this file need to be defined in order to deploy Red Hat Hyperconverged Infrastructure for Virtualization. You can find an example file at /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he_gluster_vars.json on any hyperconverged host. Example he_gluster_vars.json file Red Hat recommends encrypting this file. See Working with files encrypted using Ansible Vault for more information. B.3.1. Required variables he_appliance_password The password for the hosted engine. For a production cluster, use an encrypted value created with Ansible Vault. he_admin_password The password for the admin account of the hosted engine. For a production cluster, use an encrypted value created with Ansible Vault. he_domain_type The type of storage domain. Set to glusterfs . he_fqdn The FQDN for the hosted engine virtual machine. he_vm_mac_addr The MAC address for the appropriate network device of the hosted engine virtual machine. You can skip this option for hosted deployment with static IP configuration as in such cases the MAC address for Hosted Engine is automatically generated. he_default_gateway The FQDN of the gateway to be used. he_mgmt_network The name of the management network. Set to ovirtmgmt . he_storage_domain_name The name of the storage domain to create for the hosted engine. Set to HostedEngine . he_storage_domain_path The path of the Gluster volume that provides the storage domain. Set to /engine . he_storage_domain_addr The back-end FQDN of the first host providing the engine domain. he_mount_options Specifies additional mount options. The he_mount_option is not required for IPv4 based single node deployment of Red Hat Hyperconverged Infrastructure for Virtualization. 
For a three node deployment with IPv6 configurations, set: For a single node deployment with IPv6 configurations, set: he_bridge_if The name of the interface to use for bridge creation. he_enable_hc_gluster_service Enables Gluster services. Set to true . he_mem_size_MB The amount of memory allocated to the hosted engine virtual machine in megabytes. he_cluster The name of the cluster in which the hyperconverged hosts are placed. he_vcpus The amount of CPUs used on the engine VM. By default 4 VCPUs are allocated for Hosted Engine Virtual Machine. B.3.2. Required variables for static network configurations DHCP configuration is used on the Hosted Engine VM by default. However, if you want to use static IP or FQDN, define the following variables: he_vm_ip_addr Static IP address for Hosted Engine VM (IPv4 or IPv6). he_vm_ip_prefix IP prefix for Hosted Engine VM (IPv4 or IPv6). he_dns_addr DNS server for Hosted Engine VM (IPv4 or IPv6). he_default_gateway Default gateway for Hosted Engine VM (IPv4 or IPv6). he_vm_etc_hosts Specifies Hosted Engine VM IP address and FQDN to /etc/hosts on the host, boolean value. Example he_gluster_vars.json file with static Hosted Engine configuration Note If DNS is not available, use ping for he_network_test instead of dns . | [
"hc_nodes: hosts: host1backend.example.com: [configuration specific to this host] host2backend.example.com: host3backend.example.com: host4backend.example.com: host5backend.example.com: host6backend.example.com: vars: [configuration common to all hosts]",
"hc_nodes: hosts: host1backend.example.com: blacklist_mpath_devices: - sdb - sdc",
"hc_nodes: hosts: host1backend.example.com: gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: Str0ngPa55#",
"hc_nodes: hosts: host1backend.example.com: rootpassphrase: h1-Str0ngPa55#",
"hc_nodes: hosts: host1backend.example.com: rootdevice: /dev/sda2",
"hc_nodes: hosts: host1backend.example.com: networkinterface: ens3s0f0",
"hc_nodes: vars: ip_version: IPv4",
"hc_nodes: vars: ip_config_method: dhcp",
"hc_nodes: hosts: host1backend.example.com : ip_config_method: static host_ip_addr: 192.168.1.101 host_ip_prefix: 24 host_net_gateway: 192.168.1.100 host2backend.example.com : ip_config_method: static host_ip_addr: 192.168.1.102 host_ip_prefix: 24 host_net_gateway: 192.168.1.100 host3backend.example.com : ip_config_method: static host_ip_addr: 192.168.1.102 host_ip_prefix: 24 host_net_gateway: 192.168.1.100",
"hc_nodes: vars: gluster_infra_tangservers: - url: http:// key-server1.example.com - url: http:// key-server2.example.com : 80",
"hc_nodes: hosts: host1-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host1-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 host2-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host2-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 host3-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host3-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 vars: ip_version: IPv4 ip_config_method: dhcp gluster_infra_tangservers: - url: http:// key-server1.example.com :80 - url: http:// key-server2.example.com :80",
"hc_nodes: hosts: host1-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host1-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 host_ip_addr: host1-static-ip host_ip_prefix: network-prefix host_net_gateway: default-network-gateway host2-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host2-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 host_ip_addr: host1-static-ip host_ip_prefix: network-prefix host_net_gateway: default-network-gateway host3-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host3-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 host_ip_addr: host1-static-ip host_ip_prefix: network-prefix host_net_gateway: default-network-gateway vars: ip_version: IPv4 ip_config_method: static gluster_infra_tangservers: - url: http:// key-server1.example.com :80 - url: http:// key-server2.example.com :80",
"hc_nodes: hosts: host1backend.example.com: [configuration specific to this host] host2backend.example.com: host3backend.example.com: host4backend.example.com: host5backend.example.com: host6backend.example.com: vars: [configuration common to all hosts]",
"gluster: hosts: host2frontend.example.com: host3frontend.example.com: host4frontend.example.com: host5frontend.example.com: host6frontend.example.com: vars: storage_domains: [storage domain definitions common to all hosts]",
"hc_nodes: hosts: host1backend.example.com: blacklist_mpath_devices: - sdb - sdc",
"hc_nodes: hosts: host1backend.example.com: gluster_infra_vdo: - { name: 'vdo_sdc', device: '/dev/sdc', logicalsize: '5000G', emulate512: 'off', slabsize: '32G', blockmapcachesize: '128M', writepolicy: 'auto' } - { name: 'vdo_sdd', device: '/dev/sdd', logicalsize: '500G', emulate512: 'off', slabsize: '2G', blockmapcachesize: '128M', writepolicy: 'auto' }",
"hc_nodes: vars: cluster_nodes: - host1backend.example.com - host2backend.example.com - host3backend.example.com",
"hc_nodes: vars: gluster_features_hci_cluster: \"{{ cluster_nodes }}\"",
"hc_nodes: vars: gluster_features_hci_volumes: - volname: engine brick: /gluster_bricks/engine/engine arbiter: 0 - volname: data brick: /gluster_bricks/data1/data1,/gluster_bricks/data2/data2 arbiter: 0 - volname: vmstore brick: /gluster_bricks/vmstore/vmstore arbiter: 0",
"hc_nodes: vars: gluster_features_hci_volumes: - volname: data brick: /gluster_bricks/data/data arbiter: 0 servers: - host4backend.example.com - host5backend.example.com - host6backend.example.com - host7backend.example.com - host8backend.example.com - host9backend.example.com - volname: vmstore brick: /gluster_bricks/vmstore/vmstore arbiter: 0 servers: - host1backend.example.com - host2backend.example.com - host3backend.example.com",
"hc_nodes: hosts: host1backend.example.com: gluster_infra_volume_groups: - vgname: gluster_vg_sdb pvname: /dev/sdb - vgname: gluster_vg_sdc pvname: /dev/mapper/vdo_sdc",
"hc_nodes: hosts: host1backend.example.com: gluster_infra_mount_devices: - path: /gluster_bricks/engine lvname: gluster_lv_engine vgname: gluster_vg_sdb - path: /gluster_bricks/data lvname: gluster_lv_data vgname: gluster_vg_sdc - path: /gluster_bricks/vmstore lvname: gluster_lv_vmstore vgname: gluster_vg_sdd",
"hc_nodes: hosts: host1backend.example.com: gluster_infra_thinpools: - {vgname: 'gluster_vg_sdc', thinpoolname: 'gluster_thinpool_sdc', thinpoolsize: '500G', poolmetadatasize: '16G'} - {vgname: 'gluster_vg_sdd', thinpoolname: 'gluster_thinpool_sdd', thinpoolsize: '500G', poolmetadatasize: '16G'}",
"hc_nodes: hosts: host1backend.example.com: gluster_infra_cache_vars: - vgname: gluster_vg_sdb cachedisk: /dev/sdb,/dev/sde cachelvname: cachelv_thinpool_sdb cachethinpoolname: gluster_thinpool_sdb cachelvsize: '250G' cachemode: writethrough",
"hc_nodes: hosts: host1backend.example.com: gluster_infra_thick_lvs: - vgname: gluster_vg_sdb lvname: gluster_lv_engine size: 100G",
"hc_nodes: hosts: host1backend.example.com: gluster_infra_lv_logicalvols: - vgname: gluster_vg_sdc thinpool: gluster_thinpool_sdc lvname: gluster_lv_data lvsize: 200G - vgname: gluster_vg_sdd thinpool: gluster_thinpool_sdd lvname: gluster_lv_vmstore lvsize: 200G",
"hc_nodes: vars: gluster_infra_disktype: RAID6",
"hc_nodes: vars: gluster_infra_diskcount: 10",
"hc_nodes: vars: gluster_infra_stripe_unit_size: 256",
"hc_nodes: vars: gluster_features_force_varlogsizecheck: false",
"hc_nodes: vars: gluster_set_selinux_labels: true",
"If the host has a disk of size 1TB, then engine brick size= 100GB ( thick LV ) Pool metadata size= 16GB Spare metadata size= 16GB Available space for thinpool= 1TB - ( 100GB + 16GB + 16GB ) = 868 GB",
"hc_nodes: vars: gluster_infra_fw_ports: - 2049/tcp - 54321/tcp - 5900-6923/tcp - 16514/tcp - 5666/tcp - 16514/tcp",
"hc_nodes: vars: gluster_infra_fw_permanent: true",
"hc_nodes: vars: gluster_infra_fw_state: enabled",
"hc_nodes: vars: gluster_infra_fw_zone: public",
"hc_nodes: vars: gluster_infra_fw_services: - glusterfs",
"gluster: vars: storage_domains: - {\"name\":\"data\",\"host\":\"host1-frontend-network-FQDN\",\"address\":\"host1-backend-network-FQDN\",\"path\":\"/data\",\"function\":\"data\",\"mount_options\":\"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN\"} - {\"name\":\"vmstore\",\"host\":\"host1-frontend-network-FQDN\",\"address\":\"host1-backend-network-FQDN\",\"path\":\"/vmstore\",\"function\":\"data\",\"mount_options\":\"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN\"}",
"gluster: vars: storage_domains: - {\"name\":\"data\",\"host\":\"host1-frontend-network-FQDN\",\"address\":\"host1-backend-network-FQDN\",\"path\":\"/data\",\"function\":\"data\",\"mount_options\":\"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN,xlator-option='transport.address-family=inet6'\"} - {\"name\":\"vmstore\",\"host\":\"host1-frontend-network-FQDN\",\"address\":\"host1-backend-network-FQDN\",\"path\":\"/vmstore\",\"function\":\"data\",\"mount_options\":\"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN,xlator-option='transport.address-family=inet6'\"}",
"hc_nodes: hosts: # Host1 <host1-backend-network-FQDN>: # Blacklist multipath devices which are used for gluster bricks # If you omit blacklist_mpath_devices it means all device will be whitelisted. # If the disks are not blacklisted, and then its taken that multipath configuration # exists in the server and one should provide /dev/mapper/<WWID> instead of /dev/sdx blacklist_mpath_devices: - sdb - sdc # Enable this section 'gluster_infra_vdo', if dedupe & compression is # required on that storage volume. # The variables refers to: # name - VDO volume name to be used # device - Disk name on which VDO volume to created # logicalsize - Logical size of the VDO volume.This value is 10 times # the size of the physical disk # emulate512 - VDO device is made as 4KB block sized storage volume(4KN) # slabsize - VDO slab size. If VDO logical size >= 1000G then # slabsize is 32G else slabsize is 2G # # Following VDO values are as per recommendation and treated as constants: # blockmapcachesize - 128M # writepolicy - auto # # gluster_infra_vdo: # - { name: 'vdo_sdc', device: '/dev/sdc', logicalsize: '5000G', emulate512: 'off', slabsize: '32G', # blockmapcachesize: '128M', writepolicy: 'auto' } # - { name: 'vdo_sdd', device: '/dev/sdd', logicalsize: '3000G', emulate512: 'off', slabsize: '32G', # blockmapcachesize: '128M', writepolicy: 'auto' } # When dedupe and compression is enabled on the device, # use pvname for that device as '/dev/mapper/<vdo_device_name> # # The variables refers to: # vgname - VG to be created on the disk # pvname - Physical disk (/dev/sdc) or VDO volume (/dev/mapper/vdo_sdc) gluster_infra_volume_groups: - vgname: gluster_vg_sdb pvname: /dev/sdb - vgname: gluster_vg_sdc pvname: /dev/mapper/vdo_sdc - vgname: gluster_vg_sdd pvname: /dev/mapper/vdo_sdd gluster_infra_mount_devices: - path: /gluster_bricks/engine lvname: gluster_lv_engine vgname: gluster_vg_sdb - path: /gluster_bricks/data lvname: gluster_lv_data vgname: gluster_vg_sdc - path: /gluster_bricks/vmstore lvname: gluster_lv_vmstore vgname: gluster_vg_sdd # 'thinpoolsize' is the sum of sizes of all LVs to be created on that VG # In the case of VDO enabled, 'thinpoolsize' is 10 times the sum of sizes # of all LVs to be created on that VG. Recommended values for # 'poolmetadatasize' is 16GB and that should be considered exclusive of # 'thinpoolsize' gluster_infra_thinpools: - {vgname: 'gluster_vg_sdc', thinpoolname: 'gluster_thinpool_sdc', thinpoolsize: '500G', poolmetadatasize: '16G'} - {vgname: 'gluster_vg_sdd', thinpoolname: 'gluster_thinpool_sdd', thinpoolsize: '500G', poolmetadatasize: '16G'} # Enable the following section if LVM cache is to enabled # Following are the variables: # vgname - VG with the slow HDD device that needs caching # cachedisk - Comma separated value of slow HDD and fast SSD # In this example, /dev/sdb is the slow HDD, /dev/sde is fast SSD # cachelvname - LV cache name # cachethinpoolname - Thinpool to which the fast SSD to be attached # cachelvsize - Size of cache data LV. 
This is the SSD_size - (1/1000) of SSD_size # 1/1000th of SSD space will be used by cache LV meta # cachemode - writethrough or writeback # gluster_infra_cache_vars: # - vgname: gluster_vg_sdb # cachedisk: /dev/sdb,/dev/sde # cachelvname: cachelv_thinpool_sdb # cachethinpoolname: gluster_thinpool_sdb # cachelvsize: '250G' # cachemode: writethrough # Only the engine brick needs to be thickly provisioned # Engine brick requires 100GB of disk space gluster_infra_thick_lvs: - vgname: gluster_vg_sdb lvname: gluster_lv_engine size: 100G gluster_infra_lv_logicalvols: - vgname: gluster_vg_sdc thinpool: gluster_thinpool_sdc lvname: gluster_lv_data lvsize: 200G - vgname: gluster_vg_sdd thinpool: gluster_thinpool_sdd lvname: gluster_lv_vmstore lvsize: 200G #Host2 <host2-backend-network-FQDN>: # Blacklist multipath devices which are used for gluster bricks # If you omit blacklist_mpath_devices it means all device will be whitelisted. # If the disks are not blacklisted, and then its taken that multipath configuration # exists in the server and one should provide /dev/mapper/<WWID> instead of /dev/sdx blacklist_mpath_devices: - sdb - sdc # Enable this section 'gluster_infra_vdo', if dedupe & compression is # required on that storage volume. # The variables refers to: # name - VDO volume name to be used # device - Disk name on which VDO volume to created # logicalsize - Logical size of the VDO volume.This value is 10 times # the size of the physical disk # emulate512 - VDO device is made as 4KB block sized storage volume(4KN) # slabsize - VDO slab size. If VDO logical size >= 1000G then # slabsize is 32G else slabsize is 2G # # Following VDO values are as per recommendation and treated as constants: # blockmapcachesize - 128M # writepolicy - auto # # gluster_infra_vdo: # - { name: 'vdo_sdc', device: '/dev/sdc', logicalsize: '5000G', emulate512: 'off', slabsize: '32G', # blockmapcachesize: '128M', writepolicy: 'auto' } # - { name: 'vdo_sdd', device: '/dev/sdd', logicalsize: '3000G', emulate512: 'off', slabsize: '32G', # blockmapcachesize: '128M', writepolicy: 'auto' } # When dedupe and compression is enabled on the device, # use pvname for that device as '/dev/mapper/<vdo_device_name> # # The variables refers to: # vgname - VG to be created on the disk # pvname - Physical disk (/dev/sdc) or VDO volume (/dev/mapper/vdo_sdc) gluster_infra_volume_groups: - vgname: gluster_vg_sdb pvname: /dev/sdb - vgname: gluster_vg_sdc pvname: /dev/mapper/vdo_sdc - vgname: gluster_vg_sdd pvname: /dev/mapper/vdo_sdd gluster_infra_mount_devices: - path: /gluster_bricks/engine lvname: gluster_lv_engine vgname: gluster_vg_sdb - path: /gluster_bricks/data lvname: gluster_lv_data vgname: gluster_vg_sdc - path: /gluster_bricks/vmstore lvname: gluster_lv_vmstore vgname: gluster_vg_sdd # 'thinpoolsize' is the sum of sizes of all LVs to be created on that VG # In the case of VDO enabled, 'thinpoolsize' is 10 times the sum of sizes # of all LVs to be created on that VG. 
Recommended values for # 'poolmetadatasize' is 16GB and that should be considered exclusive of # 'thinpoolsize' gluster_infra_thinpools: - {vgname: 'gluster_vg_sdc', thinpoolname: 'gluster_thinpool_sdc', thinpoolsize: '500G', poolmetadatasize: '16G'} - {vgname: 'gluster_vg_sdd', thinpoolname: 'gluster_thinpool_sdd', thinpoolsize: '500G', poolmetadatasize: '16G'} # Enable the following section if LVM cache is to enabled # Following are the variables: # vgname - VG with the slow HDD device that needs caching # cachedisk - Comma separated value of slow HDD and fast SSD # In this example, /dev/sdb is the slow HDD, /dev/sde is fast SSD # cachelvname - LV cache name # cachethinpoolname - Thinpool to which the fast SSD to be attached # cachelvsize - Size of cache data LV. This is the SSD_size - (1/1000) of SSD_size # 1/1000th of SSD space will be used by cache LV meta # cachemode - writethrough or writeback # gluster_infra_cache_vars: # - vgname: gluster_vg_sdb # cachedisk: /dev/sdb,/dev/sde # cachelvname: cachelv_thinpool_sdb # cachethinpoolname: gluster_thinpool_sdb # cachelvsize: '250G' # cachemode: writethrough # Only the engine brick needs to be thickly provisioned # Engine brick requires 100GB of disk space gluster_infra_thick_lvs: - vgname: gluster_vg_sdb lvname: gluster_lv_engine size: 100G gluster_infra_lv_logicalvols: - vgname: gluster_vg_sdc thinpool: gluster_thinpool_sdc lvname: gluster_lv_data lvsize: 200G - vgname: gluster_vg_sdd thinpool: gluster_thinpool_sdd lvname: gluster_lv_vmstore lvsize: 200G #Host3 <host3-backend-network-FQDN>: # Blacklist multipath devices which are used for gluster bricks # If you omit blacklist_mpath_devices it means all device will be whitelisted. # If the disks are not blacklisted, and then its taken that multipath configuration # exists in the server and one should provide /dev/mapper/<WWID> instead of /dev/sdx blacklist_mpath_devices: - sdb - sdd # Enable this section 'gluster_infra_vdo', if dedupe & compression is # required on that storage volume. # The variables refers to: # name - VDO volume name to be used # device - Disk name on which VDO volume to created # logicalsize - Logical size of the VDO volume.This value is 10 times # the size of the physical disk # emulate512 - VDO device is made as 4KB block sized storage volume(4KN) # slabsize - VDO slab size. 
If VDO logical size >= 1000G then # slabsize is 32G else slabsize is 2G # # Following VDO values are as per recommendation and treated as constants: # blockmapcachesize - 128M # writepolicy - auto # # gluster_infra_vdo: # - { name: 'vdo_sdc', device: '/dev/sdc', logicalsize: '5000G', emulate512: 'off', slabsize: '32G', # blockmapcachesize: '128M', writepolicy: 'auto' } # - { name: 'vdo_sdd', device: '/dev/sdd', logicalsize: '3000G', emulate512: 'off', slabsize: '32G', # blockmapcachesize: '128M', writepolicy: 'auto' } # When dedupe and compression is enabled on the device, # use pvname for that device as '/dev/mapper/<vdo_device_name> # # The variables refers to: # vgname - VG to be created on the disk # pvname - Physical disk (/dev/sdc) or VDO volume (/dev/mapper/vdo_sdc) gluster_infra_volume_groups: - vgname: gluster_vg_sdb pvname: /dev/sdb - vgname: gluster_vg_sdc pvname: /dev/mapper/vdo_sdc - vgname: gluster_vg_sdd pvname: /dev/mapper/vdo_sdd gluster_infra_mount_devices: - path: /gluster_bricks/engine lvname: gluster_lv_engine vgname: gluster_vg_sdb - path: /gluster_bricks/data lvname: gluster_lv_data vgname: gluster_vg_sdc - path: /gluster_bricks/vmstore lvname: gluster_lv_vmstore vgname: gluster_vg_sdd # 'thinpoolsize' is the sum of sizes of all LVs to be created on that VG # In the case of VDO enabled, 'thinpoolsize' is 10 times the sum of sizes # of all LVs to be created on that VG. Recommended values for # 'poolmetadatasize' is 16GB and that should be considered exclusive of # 'thinpoolsize' gluster_infra_thinpools: - {vgname: 'gluster_vg_sdc', thinpoolname: 'gluster_thinpool_sdc', thinpoolsize: '500G', poolmetadatasize: '16G'} - {vgname: 'gluster_vg_sdd', thinpoolname: 'gluster_thinpool_sdd', thinpoolsize: '500G', poolmetadatasize: '16G'} # Enable the following section if LVM cache is to enabled # Following are the variables: # vgname - VG with the slow HDD device that needs caching # cachedisk - Comma separated value of slow HDD and fast SSD # In this example, /dev/sdb is the slow HDD, /dev/sde is fast SSD # cachelvname - LV cache name # cachethinpoolname - Thinpool to which the fast SSD to be attached # cachelvsize - Size of cache data LV. This is the SSD_size - (1/1000) of SSD_size # 1/1000th of SSD space will be used by cache LV meta # cachemode - writethrough or writeback # gluster_infra_cache_vars: # - vgname: gluster_vg_sdb # cachedisk: /dev/sdb,/dev/sde # cachelvname: cachelv_thinpool_sdb # cachethinpoolname: gluster_thinpool_sdb # cachelvsize: '250G' # cachemode: writethrough # Only the engine brick needs to be thickly provisioned # Engine brick requires 100GB of disk space gluster_infra_thick_lvs: - vgname: gluster_vg_sdb lvname: gluster_lv_engine size: 100G gluster_infra_lv_logicalvols: - vgname: gluster_vg_sdc thinpool: gluster_thinpool_sdc lvname: gluster_lv_data lvsize: 200G - vgname: gluster_vg_sdd thinpool: gluster_thinpool_sdd lvname: gluster_lv_vmstore lvsize: 200G # Common configurations vars: # In case of IPv6 based deployment \"gluster_features_enable_ipv6\" needs to be enabled,below line needs to be uncommented, like: # gluster_features_enable_ipv6: true # Add the required hosts in the cluster. 
It can be 3,6,9 or 12 hosts cluster_nodes: - <host1-backend-network-FQDN> - <host2-backend-network-FQDN> - <host3-backend-network-FQDN> gluster_features_hci_cluster: \"{{ cluster_nodes }}\" # Create Gluster volumes for hyperconverged setup in 2 formats # format-1: Create bricks for gluster 1x3 replica volumes by default # on the first 3 hosts # format-2: Create bricks on the specified hosts, and it can create # nx3 distributed-replicated or distributed arbitrated # replicate volumes # Note: format-1 and format-2 are mutually exclusive (ie) either # format-1 or format-2 to be used. Don't mix the formats for # different volumes # Format-1 - Creates gluster 1x3 replicate or arbitrated replicate volume # - engine, vmstore, data with bricks on first 3 hosts gluster_features_hci_volumes: - volname: engine brick: /gluster_bricks/engine/engine arbiter: 0 - volname: data brick: /gluster_bricks/data/data arbiter: 0 - volname: vmstore brick: /gluster_bricks/vmstore/vmstore arbiter: 0 # Format-2 - Allows to create nx3 volumes, with bricks on specified host #gluster_features_hci_volumes: # - volname: engine # brick: /gluster_bricks/engine/engine # arbiter: 0 # servers: # - host1 # - host2 # - host3 # # # Following creates 2x3 'Data' gluster volume with bricks on host4, # # host5, host6, host7, host8, host9 # - volname: data # brick: /gluster_bricks/data/data # arbiter: 0 # servers: # - host4 # - host5 # - host6 # - host7 # - host8 # - host9 # # # Following creates 2x3 'vmstore' gluster volume with 2 bricks for # # each host # - volname: vmstore # brick: /gluster_bricks/vmstore1/vmstore1,/gluster_bricks/vmstore2/vmstore2 # arbiter: 0 # servers: # - host1 # - host2 # - host3 # Firewall setup gluster_infra_fw_ports: - 2049/tcp - 54321/tcp - 5900-6923/tcp - 16514/tcp - 5666/tcp - 16514/tcp gluster_infra_fw_permanent: true gluster_infra_fw_state: enabled gluster_infra_fw_zone: public gluster_infra_fw_services: - glusterfs # Allowed values for 'gluster_infra_disktype' - RAID6, RAID5, JBOD gluster_infra_disktype: RAID6 # 'gluster_infra_diskcount' is the number of data disks in the RAID set. # Note for JBOD its 1 gluster_infra_diskcount: 10 gluster_infra_stripe_unit_size: 256 gluster_features_force_varlogsizecheck: false gluster_set_selinux_labels: true ## Auto add hosts vars gluster: hosts: <host2-frontend-network-FQDN>: <host3-frontend-network-FQDN>: vars: storage_domains: - {\"name\":\"data\",\"host\":\"host1-frontend-network-FQDN\",\"address\":\"host1-backend-network-FQDN\",\"path\":\"/data\",\"function\":\"data\",\"mount_options\":\"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN\"} - {\"name\":\"vmstore\",\"host\":\"host1-frontend-network-FQDN\",\"address\":\"host1-backend-network-FQDN\",\"path\":\"/vmstore\",\"function\":\"data\",\"mount_options\":\"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN\"} In case of IPv6 based deployment there is additional mount option required i.e. xlator-option=\"transport.address-family=inet6\", below needs to be replaced with above one. 
Ex: #storage_domains: #- {\"name\":\"data\",\"host\":\"host1-frontend-network-FQDN\",\"address\":\"host1-backend-network-FQDN\",\"path\":\"/data\",\"function\":\"data\",\"mount_options\":\"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN,xlator-option=\"transport.address-family=inet6\"\"} #- {\"name\":\"vmstore\",\"host\":\"host1-frontend-network-FQDN\",\"address\":\"host1-backend-network-FQDN\",\"path\":\"/vmstore\",\"function\":\"data\",\"mount_options\":\"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN,xlator-option=\"transport.address-family=inet6\"\"}",
"{ \"he_appliance_password\": \"encrypt-password-using-ansible-vault\", \"he_admin_password\": \"UI-password-for-login\", \"he_domain_type\": \"glusterfs\", \"he_fqdn\": \"FQDN-for-Hosted-Engine\", \"he_vm_mac_addr\": \"Valid MAC address\", \"he_default_gateway\": \"Valid Gateway\", \"he_mgmt_network\": \"ovirtmgmt\", \"he_storage_domain_name\": \"HostedEngine\", \"he_storage_domain_path\": \"/engine\", \"he_storage_domain_addr\": \"host1-backend-network-FQDN\", \"he_mount_options\": \"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN\", \"he_bridge_if\": \"interface name for bridge creation\", \"he_enable_hc_gluster_service\": true, \"he_mem_size_MB\": \"16384\", \"he_cluster\": \"Default\", \"he_vcpus\": \"4\" }",
"For a three node deployment with IPv4 configurations, set:",
"\"he_mount_options\":\"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN\"",
"\"he_mount_options\":\"backup-volfile-servers=host2-backend-network-FQDN:host3-backend-network-FQDN\",xlator-option='transport.address-family=inet6'\"",
"\"he_mount_options\":\"xlator-option='transport.address-family=inet6'\"",
"{ \"he_appliance_password\": \"mybadappliancepassword\", \"he_admin_password\": \"mybadadminpassword\", \"he_domain_type\": \"glusterfs\", \"he_fqdn\": \"engine.example.com\", \"he_vm_mac_addr\": \"00:01:02:03:04:05\", \"he_default_gateway\": \"gateway.example.com\", \"he_mgmt_network\": \"ovirtmgmt\", \"he_storage_domain_name\": \"HostedEngine\", \"he_storage_domain_path\": \"/engine\", \"he_storage_domain_addr\": \"host1-backend.example.com\", \"he_mount_options\": \"backup-volfile-servers=host2-backend.example.com:host3-backend.example.com\", \"he_bridge_if\": \"interface name for bridge creation\", \"he_enable_hc_gluster_service\": true, \"he_mem_size_MB\": \"16384\", \"he_cluster\": \"Default\", \"he_vm_ip_addr\": \"10.70.34.43\", \"he_vm_ip_prefix\": \"24\", \"he_dns_addr\": \"10.70.34.6\", \"he_default_gateway\": \"10.70.34.255\", \"he_vm_etc_hosts\": \"false\", \"he_network_test\": \"ping\" }",
"Example: \"he_network_test\": \"ping\""
] | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/automating_rhhi_for_virtualization_deployment/understanding_the_example_configuration_files |
Chapter 13. Trimming the replication changelog | Chapter 13. Trimming the replication changelog The Directory Server changelog manages a list of received and processed changes. It includes client changes and changes received from replication partners. By default, Directory Server trims the changelog entries that are older than seven days. However, you can configure: A maximum age of entries in the changelog in the nsslapd-changelogmaxage parameter. The total number of records in the changelog in the nsslapd-changelogmaxentries parameter. If you enable at least one of these settings, Directory Server trims the changelog every five minutes by default ( nsslapd-changelogtrim-interval ). Even with the trimming settings enabled, a record, and the records created after it, remain in the changelog until they are successfully replicated to all servers in the topology. If you remove the supplier from the topology as described in Removing a supplier from a replication topology , then Directory Server trims all the updates of this supplier from changelogs on other servers. 13.1. Configuring replication changelog trimming using the command line Directory Server trims the changelog entries that are older than seven days by default. However, you can configure the time after which Directory Server removes entries. You can also configure Directory Server to automatically remove entries if the number of entries exceeds a configured value. This section describes how to configure changelog trimming for the dc=example,dc=com suffix. Note Red Hat recommends setting a maximum age instead of a maximum number of entries. The maximum age should match the replication purge delay set in the nsDS5ReplicaPurgeDelay parameter in the cn=replica,cn=suffixDN,cn=mapping tree,cn=config entry. Perform this procedure on the supplier. Prerequisites You enabled replication for the dc=example,dc=com suffix. Procedure Configure changelog trimming: To set a maximum age of changelog entries, enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com replication set-changelog --suffix " dc=example,dc=com " --max-age " 4w " This command sets the maximum age to 4 weeks. The parameter supports the following units: s ( S ) for seconds m ( M ) for minutes h ( H ) for hours d ( D ) for days w ( W ) for weeks To set a maximum number of entries, enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com replication set-changelog --suffix " dc=example,dc=com " --max-entries " 100000 " This command sets the maximum number of entries in the changelog to 100,000. By default, Directory Server trims the changelog every 5 minutes (300 seconds). To set a different interval, enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com replication set-changelog --suffix " dc=example,dc=com " --trim-interval 600 This command sets the interval to 10 minutes (600 seconds). Verification Display the changelog settings of the suffix: # dsconf -D " cn=Directory Manager " ldap://server.example.com replication get-changelog --suffix " dc=example,dc=com " dn: cn=changelog, cn=userroot ,cn=ldbm database,cn=plugins,cn=config cn: changelog nsslapd-changelogmaxage: 4w nsslapd-changelogtrim-interval: 600 ... The command only displays the parameters that differ from their default values. 13.2. Manually reducing the size of a large changelog In certain situations, such as if replication changelog trimming was not enabled, the changelog can grow to an excessively large size. To fix this, you can reduce the changelog size manually.
This procedure describes how to trim the changelog of the dc=example,dc=com suffix. Perform this procedure on the supplier. Prerequisites You enabled replication for the dc=example,dc=com suffix. Procedure Optional: Display the size of the changelog: Identify the back-end database of the dc=example,dc=com suffix: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend suffix list dc=example,dc=com ( userroot ) The name in parentheses is the back-end database that stores the data of the corresponding suffix. Display the size of the changelog file of the userroot backend: # ls -lh /var/lib/dirsrv/slapd- instance_name /db/ userroot /replication_changelog.db -rw-------. 1 dirsrv dirsrv 517M Jul 5 12:58 /var/lib/dirsrv/slapd- instance_name /db/ userroot /replication_changelog.db To be able to reset the parameters after reducing the changelog size, display and note the current values of the corresponding parameters: # dsconf -D " cn=Directory Manager " ldap://server.example.com replication get-changelog --suffix " dc=example,dc=com " dn: cn=changelog, cn=userroot ,cn=ldbm database,cn=plugins,cn=config cn: changelog nsslapd-changelogmaxage: 4w nsslapd-changelogtrim-interval: 300 If you do not see any specific attributes in the output, Directory Server uses their default values. Temporarily reduce the trimming-related parameters: # dsconf -D " cn=Directory Manager " ldap://server.example.com replication set-changelog --suffix " dc=example,dc=com " --max-age " 300s " --max-entries 500 --trim-interval 60 Important For performance reasons, do not permanently use very short interval settings. Wait until the time set in the --trim-interval parameter expires. Compact the changelog to regain disk space: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend compact-db --only-changelog Reset the changelog parameters to the values they had before you temporarily reduced them: # dsconf -D " cn=Directory Manager " ldap://server.example.com replication set-changelog --suffix " dc=example,dc=com " --max-age " 4w " --trim-interval 300 Verification Display the size of the changelog: # ls -lh /var/lib/dirsrv/slapd- instance_name /db/ userroot /replication_changelog.db -rw-------. 1 dirsrv dirsrv 12M Jul 5 12:58 /var/lib/dirsrv/slapd- instance_name /db/ userroot /replication_changelog.db | [
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com replication set-changelog --suffix \" dc=example,dc=com \" --max-age \" 4w \"",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com replication set-changelog --suffix \" dc=example,dc=com \" --max-entries \" 100000 \"",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com replication set-changelog --suffix \" dc=example,dc=com \" --trim-interval 600",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com replication get-changelog --suffix \" dc=example,dc=com \" dn: cn=changelog, cn=userroot ,cn=ldbm database,cn=plugins,cn=config cn: changelog nsslapd-changelogmaxage: 4w nsslapd-changelogtrim-interval: 600",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend suffix list dc=example,dc=com ( userroot )",
"ls -lh /var/lib/dirsrv/slapd- instance_name /db/ userroot /replication_changelog.db -rw-------. 1 dirsrv dirsrv 517M Jul 5 12:58 /var/lib/dirsrv/slapd- instance_name /db/ userroot /replication_changelog.db",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com replication get-changelog --suffix \" dc=example,dc=com \" dn: cn=changelog, cn=userroot ,cn=ldbm database,cn=plugins,cn=config cn: changelog nsslapd-changelogmaxage: 4w nsslapd-changelogtrim-interval: 300",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com replication set-changelog --suffix \" dc=example,dc=com \" --max-age \" 300s \" --max-entries 500 --trim-interval 60",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend compact-db --only-changelog",
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com replication set-changelog --suffix \" dc=example,dc=com \" --max-age \" 4w \" --trim-interval 300",
"ls -lh /var/lib/dirsrv/slapd- instance_name /db/ userroot /replication_changelog.db -rw-------. 1 dirsrv dirsrv 12M Jul 5 12:58 /var/lib/dirsrv/slapd- instance_name /db/ userroot /replication_changelog.db"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_and_managing_replication/assembly_trimming-the-replication-changelog_configuring-and-managing-replication |
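The dsconf commands above are the supported interface for these settings. For reference only, the trimming attributes are ordinary LDAP attributes on the cn=changelog entry under the backend, so an equivalent change can be applied with a generic LDAP client. The following is a minimal sketch that assumes the userroot backend from the example and standard OpenLDAP client tools; the DN and attribute names are taken directly from the get-changelog output shown above:

# ldapmodify -H ldap://server.example.com -D "cn=Directory Manager" -W <<EOF
dn: cn=changelog,cn=userroot,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-changelogmaxage
nsslapd-changelogmaxage: 4w
-
replace: nsslapd-changelogtrim-interval
nsslapd-changelogtrim-interval: 300
EOF

dsconf remains the recommended interface; the raw LDAP form is shown only to make clear where the settings are stored.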
4.9. Additional Fencing Configuration Options | 4.9. Additional Fencing Configuration Options Table 4.2, "Advanced Properties of Fencing Devices", summarizes additional properties you can set for fencing devices. Note that these properties are for advanced use only. Table 4.2. Advanced Properties of Fencing Devices (each entry lists Field, Type, Default, and Description):
pcmk_host_argument (type: string, default: port) - An alternate parameter to supply instead of port. Some devices do not support the standard port parameter or may provide additional ones. Use this to specify an alternate, device-specific, parameter that should indicate the machine to be fenced. A value of none can be used to tell the cluster not to supply any additional parameters.
pcmk_reboot_action (type: string, default: reboot) - An alternate command to run instead of reboot. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the reboot action.
pcmk_reboot_timeout (type: time, default: 60s) - Specify an alternate timeout to use for reboot actions instead of stonith-timeout. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for reboot actions.
pcmk_reboot_retries (type: integer, default: 2) - The maximum number of times to retry the reboot command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries reboot actions before giving up.
pcmk_off_action (type: string, default: off) - An alternate command to run instead of off. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the off action.
pcmk_off_timeout (type: time, default: 60s) - Specify an alternate timeout to use for off actions instead of stonith-timeout. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for off actions.
pcmk_off_retries (type: integer, default: 2) - The maximum number of times to retry the off command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries off actions before giving up.
pcmk_list_action (type: string, default: list) - An alternate command to run instead of list. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the list action.
pcmk_list_timeout (type: time, default: 60s) - Specify an alternate timeout to use for list actions instead of stonith-timeout. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for list actions.
pcmk_list_retries (type: integer, default: 2) - The maximum number of times to retry the list command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries list actions before giving up.
pcmk_monitor_action (type: string, default: monitor) - An alternate command to run instead of monitor. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the monitor action.
pcmk_monitor_timeout (type: time, default: 60s) - Specify an alternate timeout to use for monitor actions instead of stonith-timeout. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for monitor actions.
pcmk_monitor_retries (type: integer, default: 2) - The maximum number of times to retry the monitor command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries monitor actions before giving up.
pcmk_status_action (type: string, default: status) - An alternate command to run instead of status. Some devices do not support the standard commands or may provide additional ones. Use this to specify an alternate, device-specific, command that implements the status action.
pcmk_status_timeout (type: time, default: 60s) - Specify an alternate timeout to use for status actions instead of stonith-timeout. Some devices need much more or much less time to complete than normal. Use this to specify an alternate, device-specific, timeout for status actions.
pcmk_status_retries (type: integer, default: 2) - The maximum number of times to retry the status command within the timeout period. Some devices do not support multiple connections. Operations may fail if the device is busy with another task so Pacemaker will automatically retry the operation, if there is time remaining. Use this option to alter the number of times Pacemaker retries status actions before giving up. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-fencedevicesadditional-haar
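In practice these properties are supplied as extra name=value options when the fencing device is created, alongside the agent's own parameters. The following is a minimal sketch only: the device name myapc, the fence_apc_snmp agent, and the credential and host values are placeholders for illustration and must be replaced with the agent and parameters that match your actual fencing hardware.

# pcs stonith create myapc fence_apc_snmp \
    ipaddr="apc.example.com" login="apc" passwd="apc" \
    pcmk_host_list="node1.example.com node2.example.com" \
    pcmk_reboot_action="reboot" pcmk_reboot_timeout="120s" pcmk_reboot_retries="3"

Only override the properties that your device actually needs; the defaults listed in the table above are appropriate for most agents.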
Chapter 19. Applying security context to Streams for Apache Kafka pods and containers | Chapter 19. Applying security context to Streams for Apache Kafka pods and containers Security context defines constraints on pods and containers. By specifying a security context, pods and containers only have the permissions they need. For example, permissions can control runtime operations or access to resources. 19.1. Handling of security context by OpenShift platform Handling of security context depends on the tooling of the OpenShift platform you are using. For example, OpenShift uses built-in security context constraints (SCCs) to control permissions. SCCs are the settings and strategies that control the security features a pod has access to. By default, OpenShift injects security context configuration automatically. In most cases, this means you don't need to configure security context for the pods and containers created by the Cluster Operator. Although you can still create and manage your own SCCs. For more information, see the OpenShift documentation . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/assembly-security-providers-str |
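When the defaults injected by the platform are not sufficient, most resources managed by the Cluster Operator expose template sections where pod and container security context can be set explicitly. The following fragment is a sketch only: it assumes a Kafka custom resource named my-cluster and a Streams version whose Kafka API supports the template.pod.securityContext override, so verify the exact fields against the API reference for your version.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    template:
      pod:
        securityContext:
          # Pod-level settings applied to the broker pods created by the Cluster Operator
          runAsNonRoot: true
          fsGroup: 0
    # the rest of the broker, listener, and storage configuration is unchanged

Because OpenShift normally assigns these values through SCCs, set them explicitly only when a specific storage backend or security policy requires it.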
23.2. Using the Maintenance Boot Modes | 23.2. Using the Maintenance Boot Modes 23.2.1. Loading the Memory (RAM) Testing Mode Faults in memory (RAM) modules can cause your system to freeze or crash unpredictably. In certain situations, memory faults might only cause errors with particular combinations of software. For this reason, you should test the memory of a computer before you install Red Hat Enterprise Linux for the first time, even if it has previously run other operating systems. Red Hat Enterprise Linux includes the Memtest86+ memory testing application. To start memory testing mode, choose Troubleshooting > Memory test at the boot menu. Testing will begin immediately. By default, Memtest86+ carries out ten tests in every pass; a different configuration can be specified by accessing the configuration screen using the c key. After the first pass completes, a message will appear at the bottom informing you of the current status, and another pass will start automatically. Note Memtest86+ only works on BIOS systems. Support for UEFI systems is currently unavailable. Figure 23.1. Memory Check Using Memtest86+ The main screen displayed while testing is in progress is divided into three main areas: The upper left corner shows information about your system's memory configuration - the amount of detected memory and processor cache and their throughputs and processor and chipset information. This information is detected when Memtest86+ starts. The upper right corner displays information about the tests - progress of the current pass and the currently running test in that pass as well as a description of the test. The central part of the screen is used to display information about the entire set of tests from the moment when the tool has started, such as the total time, the number of completed passes, number of detected errors and your test selection. On some systems, detailed information about the installed memory (such as the number of installed modules, their manufacturer, frequency and latency) will also be displayed here. After each pass completes, a short summary will appear in this location. For example: If Memtest86+ detects an error, it will also be displayed in this area and highlighted red. The message will include detailed information such as which test detected a problem, the memory location which is failing, and others. In most cases, a single successful pass (that is, a single run of all 10 tests) is sufficient to verify that your RAM is in good condition. In some rare circumstances, however, errors that went undetected on the first pass might appear on subsequent passes. To perform a thorough test on an important system, leave the tests running overnight or even for a few days in order to complete multiple passes. Note The amount of time it takes to complete a single full pass of Memtest86+ varies depending on your system's configuration (notably the RAM size and speed). For example, on a system with 2 GiB of DDR2 memory at 667 MHz, a single pass will take roughly 20 minutes to complete. To halt the tests and reboot your computer, press the Esc key at any time. For more information about using Memtest86+ , see the official website at http://www.memtest.org/ . A README file is also located in /usr/share/doc/memtest86+- version / on Red Hat Enterprise Linux systems with the memtest86+ package installed. 23.2.2. Verifying Boot Media You can test the integrity of an ISO-based installation source before using it to install Red Hat Enterprise Linux.
These sources include DVD, and ISO images stored on a hard drive or NFS server. Verifying that the ISO images are intact before you attempt an installation helps to avoid problems that are often encountered during installation. To test the checksum integrity of an ISO image, append the rd.live.check to the boot loader command line. Note that this option is used automatically if you select the default installation option from the boot menu ( Test this media & install Red Hat Enterprise Linux 7.0 ). 23.2.3. Booting Your Computer in Rescue Mode You can boot a command-line Linux system from an installation disc without actually installing Red Hat Enterprise Linux on the computer. This enables you to use the utilities and functions of a running Linux system to modify or repair already installed operating systems. To load the rescue system with the installation disk or USB drive, choose Rescue a Red Hat Enterprise Linux system from the Troubleshooting submenu in the boot menu, or use the inst.rescue boot option. Specify the language, keyboard layout and network settings for the rescue system with the screens that follow. The final setup screen configures access to the existing system on your computer. By default, rescue mode attaches an existing operating system to the rescue system under the directory /mnt/sysimage/ . For additional information about rescue mode and other maintenance modes, see Chapter 32, Basic System Recovery . | [
"** Pass complete, no errors, press Esc to exit **"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-boot-options-maintenance |
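Both of these maintenance modes map to boot options, so they can also be entered by hand: highlight a menu entry, press Tab (BIOS) or e (UEFI), and append the option to the kernel line. The lines below are a sketch only; the inst.stage2 label is a placeholder that must match the label of your actual installation media:

vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=<media-label> rd.live.check quiet
vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=<media-label> inst.rescue

The first line performs the media check described in Section 23.2.2; the second boots directly into the rescue environment described in Section 23.2.3.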
Chapter 1. Release notes | Chapter 1. Release notes Release notes contain information about new and deprecated features, breaking changes, and known issues. The following release notes apply for the most recent OpenShift Serverless releases on OpenShift Container Platform. For an overview of OpenShift Serverless functionality, see About OpenShift Serverless . Note OpenShift Serverless is based on the open source Knative project. For details about the latest Knative component releases, see the Knative blog . 1.1. About API versions API versions are an important measure of the development status of certain features and custom resources in OpenShift Serverless. Creating resources on your cluster that do not use the correct API version can cause issues in your deployment. The OpenShift Serverless Operator automatically upgrades older resources that use deprecated versions of APIs to use the latest version. For example, if you have created resources on your cluster that use older versions of the ApiServerSource API, such as v1beta1 , the OpenShift Serverless Operator automatically updates these resources to use the v1 version of the API when this is available and the v1beta1 version is deprecated. After they have been deprecated, older versions of APIs might be removed in any upcoming release. Using deprecated versions of APIs does not cause resources to fail. However, if you try to use a version of an API that has been removed, it will cause resources to fail. Ensure that your manifests are updated to use the latest version to avoid issues. 1.2. Generally Available and Technology Preview features Features which are Generally Available (GA) are fully supported and are suitable for production use. Technology Preview (TP) features are experimental features and are not intended for production use. See the Technology Preview scope of support on the Red Hat Customer Portal for more information about TP features. The following table provides information about which OpenShift Serverless features are GA and which are TP: Table 1.1. Generally Available and Technology Preview features tracker Feature 1.23 1.24 kn func TP TP kn func invoke TP TP Service Mesh mTLS GA GA emptyDir volumes GA GA HTTPS redirection GA GA Kafka broker TP TP Kafka sink TP TP Init containers support for Knative services TP GA PVC support for Knative services TP TP 1.3. Deprecated and removed features Some features that were Generally Available (GA) or a Technology Preview (TP) in releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Serverless and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within OpenShift Serverless, refer to the following table: Table 1.2. Deprecated and removed features tracker Feature 1.20 1.21 1.22 1.23 1.24 KafkaBinding API Deprecated Deprecated Removed Removed Removed kn func emit ( kn func invoke in 1.21+) Deprecated Removed Removed Removed Removed 1.4. Release notes for Red Hat OpenShift Serverless 1.24.0 OpenShift Serverless 1.24.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.4.1. New features OpenShift Serverless now uses Knative Serving 1.3. OpenShift Serverless now uses Knative Eventing 1.3. OpenShift Serverless now uses Kourier 1.3. OpenShift Serverless now uses Knative kn CLI 1.3. 
OpenShift Serverless now uses Knative Kafka 1.3. The kn func CLI plug-in now uses func 0.24. Init containers support for Knative services is now generally available (GA). OpenShift Serverless logic is now available as a Developer Preview. It enables defining declarative workflow models for managing serverless applications. You can now use the cost management service with OpenShift Serverless. 1.4.2. Fixed issues Integrating OpenShift Serverless with Red Hat OpenShift Service Mesh causes the net-istio-controller pod to run out of memory on startup when too many secrets are present on the cluster. It is now possible to enable secret filtering, which causes net-istio-controller to consider only secrets with a networking.internal.knative.dev/certificate-uid label, thus reducing the amount of memory needed. The OpenShift Serverless Functions Technology Preview now uses Cloud Native Buildpacks by default to build container images. 1.4.3. Known issues The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker, Kafka source, and Kafka sink. In OpenShift Serverless 1.23, support for KafkaBindings and the kafka-binding webhook was removed. However, an existing kafkabindings.webhook.kafka.sources.knative.dev MutatingWebhookConfiguration might remain, pointing to the kafka-source-webhook service, which no longer exists. For certain specifications of KafkaBindings on the cluster, kafkabindings.webhook.kafka.sources.knative.dev MutatingWebhookConfiguration might be configured to pass any create and update events to various resources, such as Deployments, Knative Services, or Jobs, through the webhook, which would then fail. To work around this issue, manually delete kafkabindings.webhook.kafka.sources.knative.dev MutatingWebhookConfiguration from the cluster after upgrading to OpenShift Serverless 1.23: $ oc delete mutatingwebhookconfiguration kafkabindings.webhook.kafka.sources.knative.dev 1.5. Release notes for Red Hat OpenShift Serverless 1.23.0 OpenShift Serverless 1.23.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.5.1. New features OpenShift Serverless now uses Knative Serving 1.2. OpenShift Serverless now uses Knative Eventing 1.2. OpenShift Serverless now uses Kourier 1.2. OpenShift Serverless now uses Knative ( kn ) CLI 1.2. OpenShift Serverless now uses Knative Kafka 1.2. The kn func CLI plug-in now uses func 0.24. It is now possible to use the kafka.eventing.knative.dev/external.topic annotation with the Kafka broker. This annotation makes it possible to use an existing externally managed topic instead of the broker creating its own internal topic. The kafka-ch-controller and kafka-webhook Kafka components no longer exist. These components have been replaced by the kafka-webhook-eventing component. The OpenShift Serverless Functions Technology Preview now uses Source-to-Image (S2I) by default to build container images. 1.5.2. Known issues The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker, Kafka source, and Kafka sink. If you delete a namespace that includes a Kafka broker, the namespace finalizer may fail to be removed if the broker's auth.secret.ref.name secret is deleted before the broker. Running OpenShift Serverless with a large number of Knative services can cause Knative activator pods to run close to their default memory limits of 600MB. These pods might be restarted if memory consumption reaches this limit.
Requests and limits for the activator deployment can be configured by modifying the KnativeServing custom resource: apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: deployments: - name: activator resources: - container: activator requests: cpu: 300m memory: 60Mi limits: cpu: 1000m memory: 1000Mi If you are using Cloud Native Buildpacks as the local build strategy for a function, kn func is unable to automatically start podman or use an SSH tunnel to a remote daemon. The workaround for these issues is to have a Docker or podman daemon already running on the local development computer before deploying a function. On-cluster function builds currently fail for Quarkus and Golang runtimes. They work correctly for Node, Typescript, Python, and Springboot runtimes. Additional resources Source-to-Image 1.6. Release notes for Red Hat OpenShift Serverless 1.22.0 OpenShift Serverless 1.22.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.6.1. New features OpenShift Serverless now uses Knative Serving 1.1. OpenShift Serverless now uses Knative Eventing 1.1. OpenShift Serverless now uses Kourier 1.1. OpenShift Serverless now uses Knative ( kn ) CLI 1.1. OpenShift Serverless now uses Knative Kafka 1.1. The kn func CLI plug-in now uses func 0.23. Init containers support for Knative services is now available as a Technology Preview. Persistent volume claim (PVC) support for Knative services is now available as a Technology Preview. The knative-serving , knative-serving-ingress , knative-eventing and knative-kafka system namespaces now have the knative.openshift.io/part-of: "openshift-serverless" label by default. The Knative Eventing - Kafka Broker/Trigger dashboard has been added, which allows visualizing Kafka broker and trigger metrics in the web console. The Knative Eventing - KafkaSink dashboard has been added, which allows visualizing KafkaSink metrics in the web console. The Knative Eventing - Broker/Trigger dashboard is now called Knative Eventing - Channel-based Broker/Trigger . The knative.openshift.io/part-of: "openshift-serverless" label has substituted the knative.openshift.io/system-namespace label. Naming style in Knative Serving YAML configuration files changed from camel case ( ExampleName ) to hyphen style ( example-name ). Beginning with this release, use the hyphen style notation when creating or editing Knative Serving YAML configuration files. 1.6.2. Known issues The Federal Information Processing Standards (FIPS) mode is disabled for Kafka broker, Kafka source, and Kafka sink. 1.7. Release notes for Red Hat OpenShift Serverless 1.21.0 OpenShift Serverless 1.21.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.7.1. New features OpenShift Serverless now uses Knative Serving 1.0 OpenShift Serverless now uses Knative Eventing 1.0. OpenShift Serverless now uses Kourier 1.0. OpenShift Serverless now uses Knative ( kn ) CLI 1.0. OpenShift Serverless now uses Knative Kafka 1.0. The kn func CLI plug-in now uses func 0.21. The Kafka sink is now available as a Technology Preview. The Knative open source project has begun to deprecate camel-cased configuration keys in favor of using kebab-cased keys consistently. 
As a result, the defaultExternalScheme key, previously mentioned in the OpenShift Serverless 1.18.0 release notes, is now deprecated and replaced by the default-external-scheme key. Usage instructions for the key remain the same. 1.7.2. Fixed issues In OpenShift Serverless 1.20.0, there was an event delivery issue affecting the use of kn event send to send events to a service. This issue is now fixed. In OpenShift Serverless 1.20.0 ( func 0.20), TypeScript functions created with the http template failed to deploy on the cluster. This issue is now fixed. In OpenShift Serverless 1.20.0 ( func 0.20), deploying a function using the gcr.io registry failed with an error. This issue is now fixed. In OpenShift Serverless 1.20.0 ( func 0.20), creating a Springboot function project directory with the kn func create command and then running the kn func build command failed with an error message. This issue is now fixed. In OpenShift Serverless 1.19.0 ( func 0.19), some runtimes were unable to build a function by using podman. This issue is now fixed. 1.7.3. Known issues Currently, the domain mapping controller cannot process the URI of a broker because it contains a path, which is not yet supported. This means that, if you want to use a DomainMapping custom resource (CR) to map a custom domain to a broker, you must configure the DomainMapping CR with the broker's ingress service, and append the exact path of the broker to the custom domain: Example DomainMapping CR apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain-name> namespace: knative-eventing spec: ref: name: broker-ingress kind: Service apiVersion: v1 The URI for the broker is then <domain-name>/<broker-namespace>/<broker-name> . 1.8. Release notes for Red Hat OpenShift Serverless 1.20.0 OpenShift Serverless 1.20.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.8.1. New features OpenShift Serverless now uses Knative Serving 0.26. OpenShift Serverless now uses Knative Eventing 0.26. OpenShift Serverless now uses Kourier 0.26. OpenShift Serverless now uses Knative ( kn ) CLI 0.26. OpenShift Serverless now uses Knative Kafka 0.26. The kn func CLI plug-in now uses func 0.20. The Kafka broker is now available as a Technology Preview. Important The Kafka broker, which is currently in Technology Preview, is not supported on FIPS. The kn event plug-in is now available as a Technology Preview. The --min-scale and --max-scale flags for the kn service create command have been deprecated. Use the --scale-min and --scale-max flags instead. 1.8.2. Known issues OpenShift Serverless deploys Knative services with a default address that uses HTTPS. When sending an event to a resource inside the cluster, the sender does not have the cluster certificate authority (CA) configured. This causes event delivery to fail, unless the cluster uses globally accepted certificates. For example, an event delivery to a publicly accessible address works: $ kn event send --to-url https://ce-api.foo.example.com/ On the other hand, this delivery fails if the service uses a public address with an HTTPS certificate issued by a custom CA: $ kn event send --to Service:serving.knative.dev/v1:event-display Sending an event to other addressable objects, such as brokers or channels, is not affected by this issue and works as expected.
The Kafka broker currently does not work on a cluster with Federal Information Processing Standards (FIPS) mode enabled. If you create a Springboot function project directory with the kn func create command, subsequent running of the kn func build command fails with this error message: [analyzer] no stack metadata found at path '' [analyzer] ERROR: failed to : set API for buildpack 'paketo-buildpacks/[email protected]': buildpack API version '0.7' is incompatible with the lifecycle As a workaround, you can change the builder property to gcr.io/paketo-buildpacks/builder:base in the function configuration file func.yaml . Deploying a function using the gcr.io registry fails with this error message: Error: failed to get credentials: failed to verify credentials: status code: 404 As a workaround, use a different registry than gcr.io , such as quay.io or docker.io . TypeScript functions created with the http template fail to deploy on the cluster. As a workaround, in the func.yaml file, replace the following section: buildEnvs: [] with this: buildEnvs: - name: BP_NODE_RUN_SCRIPTS value: build In func version 0.20, some runtimes might be unable to build a function by using podman. You might see an error message similar to the following: ERROR: failed to image: error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info": EOF The following workaround exists for this issue: Update the podman service by adding --time=0 to the service ExecStart definition: Example service configuration ExecStart=/usr/bin/podman $LOGGING system service --time=0 Restart the podman service by running the following commands: $ systemctl --user daemon-reload $ systemctl restart --user podman.socket Alternatively, you can expose the podman API by using TCP: $ podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534 1.9. Release notes for Red Hat OpenShift Serverless 1.19.0 OpenShift Serverless 1.19.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.9.1. New features OpenShift Serverless now uses Knative Serving 0.25. OpenShift Serverless now uses Knative Eventing 0.25. OpenShift Serverless now uses Kourier 0.25. OpenShift Serverless now uses Knative ( kn ) CLI 0.25. OpenShift Serverless now uses Knative Kafka 0.25. The kn func CLI plug-in now uses func 0.19. The KafkaBinding API is deprecated in OpenShift Serverless 1.19.0 and will be removed in a future release. HTTPS redirection is now supported and can be configured either globally for a cluster or per each Knative service. 1.9.2. Fixed issues In previous releases, the Kafka channel dispatcher waited only for the local commit to succeed before responding, which might have caused lost events in the case of an Apache Kafka node failure. The Kafka channel dispatcher now waits for all in-sync replicas to commit before responding. 1.9.3. Known issues In func version 0.19, some runtimes might be unable to build a function by using podman.
You might see an error message similar to the following: ERROR: failed to image: error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info": EOF The following workaround exists for this issue: Update the podman service by adding --time=0 to the service ExecStart definition: Example service configuration ExecStart=/usr/bin/podman $LOGGING system service --time=0 Restart the podman service by running the following commands: $ systemctl --user daemon-reload $ systemctl restart --user podman.socket Alternatively, you can expose the podman API by using TCP: $ podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534 1.10. Release notes for Red Hat OpenShift Serverless 1.18.0 OpenShift Serverless 1.18.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.10.1. New features OpenShift Serverless now uses Knative Serving 0.24.0. OpenShift Serverless now uses Knative Eventing 0.24.0. OpenShift Serverless now uses Kourier 0.24.0. OpenShift Serverless now uses Knative ( kn ) CLI 0.24.0. OpenShift Serverless now uses Knative Kafka 0.24.7. The kn func CLI plug-in now uses func 0.18.0. In the upcoming OpenShift Serverless 1.19.0 release, the URL scheme of external routes will default to HTTPS for enhanced security. If you do not want this change to apply for your workloads, you can override the default setting before upgrading to 1.19.0, by adding the following YAML to your KnativeServing custom resource (CR): ... spec: config: network: defaultExternalScheme: "http" ... If you want the change to apply in 1.18.0 already, add the following YAML: ... spec: config: network: defaultExternalScheme: "https" ... In the upcoming OpenShift Serverless 1.19.0 release, the default service type by which the Kourier Gateway is exposed will be ClusterIP and not LoadBalancer . If you do not want this change to apply to your workloads, you can override the default setting before upgrading to 1.19.0, by adding the following YAML to your KnativeServing custom resource (CR): ... spec: ingress: kourier: service-type: LoadBalancer ... You can now use emptyDir volumes with OpenShift Serverless. See the OpenShift Serverless documentation about Knative Serving for details. Rust templates are now available when you create a function using kn func . 1.10.2. Fixed issues The prior 1.4 version of Camel-K was not compatible with OpenShift Serverless 1.17.0. The issue in Camel-K has been fixed, and Camel-K version 1.4.1 can be used with OpenShift Serverless 1.17.0. Previously, if you created a new subscription for a Kafka channel, or a new Kafka source, a delay was possible in the Kafka data plane becoming ready to dispatch messages after the newly created subscription or sink reported a ready status. As a result, messages that were sent during the time when the data plane was not reporting a ready status might not have been delivered to the subscriber or sink. In OpenShift Serverless 1.18.0, the issue is fixed and the initial messages are no longer lost. For more information about the issue, see Knowledgebase Article #6343981 . 1.10.3. Known issues Older versions of the Knative kn CLI might use older versions of the Knative Serving and Knative Eventing APIs. For example, version 0.23.2 of the kn CLI uses the v1alpha1 API version. On the other hand, newer releases of OpenShift Serverless might no longer support older API versions.
For example, OpenShift Serverless 1.18.0 no longer supports version v1alpha1 of the kafkasources.sources.knative.dev API. Consequently, using an older version of the Knative kn CLI with a newer OpenShift Serverless might produce an error because the kn cannot find the outdated API. For example, version 0.23.2 of the kn CLI does not work with OpenShift Serverless 1.18.0. To avoid issues, use the latest kn CLI version available for your OpenShift Serverless release. For OpenShift Serverless 1.18.0, use Knative kn CLI 0.24.0. 1.11. Release Notes for Red Hat OpenShift Serverless 1.17.0 OpenShift Serverless 1.17.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.11.1. New features OpenShift Serverless now uses Knative Serving 0.23.0. OpenShift Serverless now uses Knative Eventing 0.23.0. OpenShift Serverless now uses Kourier 0.23.0. OpenShift Serverless now uses Knative kn CLI 0.23.0. OpenShift Serverless now uses Knative Kafka 0.23.0. The kn func CLI plug-in now uses func 0.17.0. In the upcoming OpenShift Serverless 1.19.0 release, the URL scheme of external routes will default to HTTPS for enhanced security. If you do not want this change to apply for your workloads, you can override the default setting before upgrading to 1.19.0, by adding the following YAML to your KnativeServing custom resource (CR): ... spec: config: network: defaultExternalScheme: "http" ... mTLS functionality is now Generally Available (GA). TypeScript templates are now available when you create a function using kn func . Changes to API versions in Knative Eventing 0.23.0: The v1alpha1 version of the KafkaChannel API, which was deprecated in OpenShift Serverless version 1.14.0, has been removed. If the ChannelTemplateSpec parameters of your config maps contain references to this older version, you must update this part of the spec to use the correct API version. 1.11.2. Known issues If you try to use an older version of the Knative kn CLI with a newer OpenShift Serverless release, the API is not found and an error occurs. For example, if you use the 1.16.0 release of the kn CLI, which uses version 0.22.0, with the 1.17.0 OpenShift Serverless release, which uses the 0.23.0 versions of the Knative Serving and Knative Eventing APIs, the CLI does not work because it continues to look for the outdated 0.22.0 API versions. Ensure that you are using the latest kn CLI version for your OpenShift Serverless release to avoid issues. Kafka channel metrics are not monitored or shown in the corresponding web console dashboard in this release. This is due to a breaking change in the Kafka dispatcher reconciling process. If you create a new subscription for a Kafka channel, or a new Kafka source, there might be a delay in the Kafka data plane becoming ready to dispatch messages after the newly created subscription or sink reports a ready status. As a result, messages that are sent during the time when the data plane is not reporting a ready status might not be delivered to the subscriber or sink. For more information about this issue and possible workarounds, see Knowledge Article #6343981 . The Camel-K 1.4 release is not compatible with OpenShift Serverless version 1.17.0. This is because Camel-K 1.4 uses APIs that were removed in Knative version 0.23.0. There is currently no workaround available for this issue. If you need to use Camel-K 1.4 with OpenShift Serverless, do not upgrade to OpenShift Serverless version 1.17.0. 
Note The issue has been fixed, and Camel-K version 1.4.1 is compatible with OpenShift Serverless 1.17.0. 1.12. Release Notes for Red Hat OpenShift Serverless 1.16.0 OpenShift Serverless 1.16.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.12.1. New features OpenShift Serverless now uses Knative Serving 0.22.0. OpenShift Serverless now uses Knative Eventing 0.22.0. OpenShift Serverless now uses Kourier 0.22.0. OpenShift Serverless now uses Knative kn CLI 0.22.0. OpenShift Serverless now uses Knative Kafka 0.22.0. The kn func CLI plug-in now uses func 0.16.0. The kn func emit command has been added to the functions kn plug-in. You can use this command to send events to test locally deployed functions. 1.12.2. Known issues You must upgrade OpenShift Container Platform to version 4.6.30, 4.7.11, or higher before upgrading to OpenShift Serverless 1.16.0. The AMQ Streams Operator might prevent the installation or upgrade of the OpenShift Serverless Operator. If this happens, the following error is thrown by Operator Lifecycle Manager (OLM): WARNING: found multiple channel heads: [amqstreams.v1.7.2 amqstreams.v1.6.2], please check the `replaces`/`skipRange` fields of the operator bundles. You can fix this issue by uninstalling the AMQ Streams Operator before installing or upgrading the OpenShift Serverless Operator. You can then reinstall the AMQ Streams Operator. If Service Mesh is enabled with mTLS, metrics for Knative Serving are disabled by default because Service Mesh prevents Prometheus from scraping metrics. For instructions on enabling Knative Serving metrics for use with Service Mesh and mTLS, see the "Integrating Service Mesh with OpenShift Serverless" section of the Serverless documentation. If you deploy Service Mesh CRs with the Istio ingress enabled, you might see the following warning in the istio-ingressgateway pod: 2021-05-02T12:56:17.700398Z warning envoy config [external/envoy/source/common/config/grpc_subscription_impl.cc:101] gRPC config for type.googleapis.com/envoy.api.v2.Listener rejected: Error adding/updating listener(s) 0.0.0.0_8081: duplicate listener 0.0.0.0_8081 found Your Knative services might also not be accessible. You can use the following workaround to fix this issue by recreating the knative-local-gateway service: Delete the existing knative-local-gateway service in the istio-system namespace: USD oc delete services -n istio-system knative-local-gateway Create and apply a knative-local-gateway service that contains the following YAML: apiVersion: v1 kind: Service metadata: name: knative-local-gateway namespace: istio-system labels: experimental.istio.io/disable-gateway-port-translation: "true" spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8081 If you have 1000 Knative services on a cluster, and then perform a reinstall or upgrade of Knative Serving, there is a delay when you create the first new service after the KnativeServing custom resource (CR) becomes Ready . The 3scale-kourier-control service reconciles all previously existing Knative services before processing the creation of a new service, which causes the new service to spend approximately 800 seconds in an IngressNotConfigured or Unknown state before the state updates to Ready . 
If you create a new subscription for a Kafka channel, or a new Kafka source, there might be a delay in the Kafka data plane becoming ready to dispatch messages after the newly created subscription or sink reports a ready status. As a result, messages that are sent during the time when the data plane is not reporting a ready status might not be delivered to the subscriber or sink. For more information about this issue and possible workarounds, see Knowledge Article #6343981 . 1.13. Release Notes for Red Hat OpenShift Serverless 1.15.0 OpenShift Serverless 1.15.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.13.1. New features OpenShift Serverless now uses Knative Serving 0.21.0. OpenShift Serverless now uses Knative Eventing 0.21.0. OpenShift Serverless now uses Kourier 0.21.0. OpenShift Serverless now uses Knative kn CLI 0.21.0. OpenShift Serverless now uses Knative Kafka 0.21.1. OpenShift Serverless Functions is now available as a Technology Preview. Important The serving.knative.dev/visibility label, which was previously used to create private services, is now deprecated. You must update existing services to use the networking.knative.dev/visibility label instead. 1.13.2. Known issues If you create a new subscription for a Kafka channel, or a new Kafka source, there might be a delay in the Kafka data plane becoming ready to dispatch messages after the newly created subscription or sink reports a ready status. As a result, messages that are sent during the time when the data plane is not reporting a ready status might not be delivered to the subscriber or sink. For more information about this issue and possible workarounds, see Knowledge Article #6343981 . 1.14. Release Notes for Red Hat OpenShift Serverless 1.14.0 OpenShift Serverless 1.14.0 is now available. New features, changes, and known issues that pertain to OpenShift Serverless on OpenShift Container Platform are included in this topic. 1.14.1. New features OpenShift Serverless now uses Knative Serving 0.20.0. OpenShift Serverless uses Knative Eventing 0.20.0. OpenShift Serverless now uses Kourier 0.20.0. OpenShift Serverless now uses Knative kn CLI 0.20.0. OpenShift Serverless now uses Knative Kafka 0.20.0. Knative Kafka on OpenShift Serverless is now Generally Available (GA). Important Only the v1beta1 version of the APIs for KafkaChannel and KafkaSource objects on OpenShift Serverless are supported. Do not use the v1alpha1 version of these APIs, as this version is now deprecated. The Operator channel for installing and upgrading OpenShift Serverless has been updated to stable for OpenShift Container Platform 4.6 and newer versions. OpenShift Serverless is now supported on IBM Power Systems, IBM Z, and LinuxONE, except for the following features, which are not yet supported: Knative Kafka functionality. OpenShift Serverless Functions developer preview. 1.14.2. Known issues Subscriptions for the Kafka channel sometimes fail to become marked as READY and remain in the SubscriptionNotMarkedReadyByChannel state. You can fix this by restarting the dispatcher for the Kafka channel. If you create a new subscription for a Kafka channel, or a new Kafka source, there might be a delay in the Kafka data plane becoming ready to dispatch messages after the newly created subscription or sink reports a ready status. 
As a result, messages that are sent during the time when the data plane is not reporting a ready status might not be delivered to the subscriber or sink. For more information about this issue and possible workarounds, see Knowledge Article #6343981 . | [
"oc delete mutatingwebhookconfiguration kafkabindings.webhook.kafka.sources.knative.dev",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: deployments: - name: activator resources: - container: activator requests: cpu: 300m memory: 60Mi limits: cpu: 1000m memory: 1000Mi",
"apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain-name> namespace: knative-eventing spec: ref: name: broker-ingress kind: Service apiVersion: v1",
"kn event send --to-url https://ce-api.foo.example.com/",
"kn event send --to Service:serving.knative.dev/v1:event-display",
"[analyzer] no stack metadata found at path '' [analyzer] ERROR: failed to : set API for buildpack 'paketo-buildpacks/[email protected]': buildpack API version '0.7' is incompatible with the lifecycle",
"Error: failed to get credentials: failed to verify credentials: status code: 404",
"buildEnvs: []",
"buildEnvs: - name: BP_NODE_RUN_SCRIPTS value: build",
"ERROR: failed to image: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info\": EOF",
"ExecStart=/usr/bin/podman USDLOGGING system service --time=0",
"systemctl --user daemon-reload",
"systemctl restart --user podman.socket",
"podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534",
"ERROR: failed to image: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info\": EOF",
"ExecStart=/usr/bin/podman USDLOGGING system service --time=0",
"systemctl --user daemon-reload",
"systemctl restart --user podman.socket",
"podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534",
"spec: config: network: defaultExternalScheme: \"http\"",
"spec: config: network: defaultExternalScheme: \"https\"",
"spec: ingress: kourier: service-type: LoadBalancer",
"spec: config: network: defaultExternalScheme: \"http\"",
"WARNING: found multiple channel heads: [amqstreams.v1.7.2 amqstreams.v1.6.2], please check the `replaces`/`skipRange` fields of the operator bundles.",
"2021-05-02T12:56:17.700398Z warning envoy config [external/envoy/source/common/config/grpc_subscription_impl.cc:101] gRPC config for type.googleapis.com/envoy.api.v2.Listener rejected: Error adding/updating listener(s) 0.0.0.0_8081: duplicate listener 0.0.0.0_8081 found",
"oc delete services -n istio-system knative-local-gateway",
"apiVersion: v1 kind: Service metadata: name: knative-local-gateway namespace: istio-system labels: experimental.istio.io/disable-gateway-port-translation: \"true\" spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8081"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/serverless/serverless-release-notes |
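As a quick illustration of the KnativeServing overrides shown in the release notes above, the same spec fragment can be merged into the CR without opening an editor by using oc patch. This is only a sketch: it assumes the default CR name and namespace ( knative-serving in the knative-serving namespace) used in the earlier KnativeServing example, so adjust both if your installation differs.
oc patch knativeserving/knative-serving -n knative-serving --type merge -p '{"spec":{"config":{"network":{"defaultExternalScheme":"http"}}}}'
The same pattern works for the Kourier service-type override; only the JSON payload changes.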
function::user_short | function::user_short Name function::user_short - Retrieves a short value stored in user space. Synopsis Arguments addr The user space address to retrieve the short from. General Syntax user_short:long(addr:long) Description Returns the short value from a given user space address. Returns zero when user space data is not accessible. | [
"function user_short:long(addr:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-user-short |
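For illustration, user_short is typically called from a probe handler with a user-space pointer. The following one-liner is only a sketch: it assumes the syscall tapset exposes the buf_uaddr variable for write(2) (common, but not part of the reference above) and prints the first two bytes of each written buffer as a signed short, falling back to zero when the address is not accessible.
stap -e 'probe syscall.write { printf("%s: first short of buf = %d\n", execname(), user_short(buf_uaddr)) }'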
Chapter 9. Sources | Chapter 9. Sources The updated Red Hat Ceph Storage source code packages are available at the following location: For Red Hat Enterprise Linux 8: http://ftp.redhat.com/redhat/linux/enterprise/8Base/en/RHCEPH/SRPMS/ | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/5.1_release_notes/sources |
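If you need the source packages locally, one hypothetical way to mirror that directory (assuming anonymous HTTP access to the location above) is a recursive wget restricted to source RPMs:
wget --recursive --no-parent --accept '*.src.rpm' http://ftp.redhat.com/redhat/linux/enterprise/8Base/en/RHCEPH/SRPMS/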
Chapter 46. l2gw | Chapter 46. l2gw This chapter describes the commands under the l2gw command. 46.1. l2gw connection create Create l2gateway-connection Usage: Table 46.1. Positional arguments Value Summary <GATEWAY-NAME/UUID> Descriptive name for logical gateway. <NETWORK-NAME/UUID> Network name or uuid. Table 46.2. Command arguments Value Summary -h, --help Show this help message and exit --default-segmentation-id SEG_ID Default segmentation-id that will be applied to the interfaces for which segmentation id was not specified in l2-gateway-create command. Table 46.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 46.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 46.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 46.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 46.2. l2gw connection delete Delete a given l2gateway-connection Usage: Table 46.7. Positional arguments Value Summary <L2_GATEWAY_CONNECTIONS> Id(s) of l2_gateway_connections(s) to delete. Table 46.8. Command arguments Value Summary -h, --help Show this help message and exit 46.3. l2gw connection list List l2gateway-connections Usage: Table 46.9. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 46.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 46.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 46.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 46.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 46.4. l2gw connection show Show information of a given l2gateway-connection Usage: Table 46.14. Positional arguments Value Summary <L2_GATEWAY_CONNECTION> Id of l2_gateway_connection to look up. Table 46.15. Command arguments Value Summary -h, --help Show this help message and exit Table 46.16. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 46.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 46.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 46.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 46.5. l2gw create Create l2gateway resource Usage: Table 46.20. Positional arguments Value Summary <GATEWAY-NAME> Descriptive name for logical gateway. Table 46.21. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --device name=name,interface_names=INTERFACE-DETAILS Device name and interface-names of l2gateway. INTERFACE-DETAILS is of form "<interface_name1>;[<inte rface_name2>][|<seg_id1>[#<seg_id2>]]" (--device option can be repeated) Table 46.22. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 46.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 46.24. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 46.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 46.6. l2gw delete Delete a given l2gateway Usage: Table 46.26. Positional arguments Value Summary <L2_GATEWAY> Id(s) or name(s) of l2_gateway to delete. Table 46.27. Command arguments Value Summary -h, --help Show this help message and exit 46.7. l2gw list List l2gateway that belongs to a given tenant Usage: Table 46.28. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 46.29. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 46.30. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 46.31. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 46.32. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 46.8. l2gw show Show information of a given l2gateway Usage: Table 46.33. Positional arguments Value Summary <L2_GATEWAY> Id or name of l2_gateway to look up. Table 46.34. Command arguments Value Summary -h, --help Show this help message and exit Table 46.35. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 46.36. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 46.37. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 46.38. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 46.9. l2gw update Update a given l2gateway Usage: Table 46.39. Positional arguments Value Summary <L2_GATEWAY> Id or name of l2_gateway to update. Table 46.40. Command arguments Value Summary -h, --help Show this help message and exit --name name Descriptive name for logical gateway. --device name=name,interface_names=INTERFACE-DETAILS Device name and interface-names of l2gateway. INTERFACE-DETAILS is of form "<interface_name1>;[<inte rface_name2>][|<seg_id1>[#<seg_id2>]]" (--device option can be repeated) Table 46.41. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 46.42. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 46.43. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 46.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. 
implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack l2gw connection create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--default-segmentation-id SEG_ID] <GATEWAY-NAME/UUID> <NETWORK-NAME/UUID>",
"openstack l2gw connection delete [-h] <L2_GATEWAY_CONNECTIONS> [<L2_GATEWAY_CONNECTIONS> ...]",
"openstack l2gw connection list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--project <project>] [--project-domain <project-domain>]",
"openstack l2gw connection show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <L2_GATEWAY_CONNECTION>",
"openstack l2gw create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--project <project>] [--project-domain <project-domain>] [--device name=name,interface_names=INTERFACE-DETAILS] <GATEWAY-NAME>",
"openstack l2gw delete [-h] <L2_GATEWAY> [<L2_GATEWAY> ...]",
"openstack l2gw list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--project <project>] [--project-domain <project-domain>]",
"openstack l2gw show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <L2_GATEWAY>",
"openstack l2gw update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name name] [--device name=name,interface_names=INTERFACE-DETAILS] <L2_GATEWAY>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/l2gw |
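Putting the commands above together, a typical sequence might look like the following sketch; the gateway, device, interface, and network names are placeholders, and the interface details follow the documented "<interface_name1>;[<interface_name2>][|<seg_id1>]" form.
openstack l2gw create --device 'name=tor-switch-1,interface_names=eth0;eth1|100' gateway-1
openstack l2gw connection create --default-segmentation-id 100 gateway-1 private-net
openstack l2gw connection list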
Chapter 21. Monitoring application performance with perf | Chapter 21. Monitoring application performance with perf You can use the perf tool to monitor and analyze application performance. 21.1. Attaching perf record to a running process You can attach perf record to a running process. This instructs perf record to sample and record performance data only in the specified processes. Prerequisites You have the perf user space tool installed as described in Installing perf . Procedure Attach perf record to a running process: The example samples and records performance data of the processes with the process IDs ID1 and ID2 for a time period of seconds seconds, as dictated by the sleep command. You can also configure perf to record events in specific threads: Note When using the -t flag and stipulating thread IDs, perf disables inheritance by default. You can enable inheritance by adding the --inherit option. 21.2. Capturing call graph data with perf record You can configure the perf record tool so that it records which function is calling other functions in the performance profile. This helps to identify a bottleneck if several processes are calling the same function. Prerequisites You have the perf user space tool installed as described in Installing perf . Procedure Sample and record performance data with the --call-graph option: Replace command with the command during which you want to sample data. If you do not specify a command, then perf record will sample data until you manually stop it by pressing Ctrl + C . Replace method with one of the following unwinding methods: fp Uses the frame pointer method. Depending on compiler optimization, such as with binaries built with the GCC option -fomit-frame-pointer , this method might not be able to unwind the stack. dwarf Uses DWARF Call Frame Information to unwind the stack. lbr Uses the last branch record hardware on Intel processors. Additional resources perf-record(1) man page on your system 21.3. Analyzing perf.data with perf report You can use perf report to display and analyze a perf.data file. Prerequisites You have the perf user space tool installed as described in Installing perf . There is a perf.data file in the current directory. If the perf.data file was created with root access, you need to run perf report with root access too. Procedure Display the contents of the perf.data file for further analysis: This command displays output similar to the following: Additional resources perf-report(1) man page on your system | [
"perf record -p ID1,ID2 sleep seconds",
"perf record -t ID1,ID2 sleep seconds",
"perf record --call-graph method command",
"perf report",
"Samples: 2K of event 'cycles', Event count (approx.): 235462960 Overhead Command Shared Object Symbol 2.36% kswapd0 [kernel.kallsyms] [k] page_vma_mapped_walk 2.13% sssd_kcm libc-2.28.so [.] memset_avx2_erms 2.13% perf [kernel.kallsyms] [k] smp_call_function_single 1.53% gnome-shell libc-2.28.so [.] strcmp_avx2 1.17% gnome-shell libglib-2.0.so.0.5600.4 [.] g_hash_table_lookup 0.93% Xorg libc-2.28.so [.] memmove_avx_unaligned_erms 0.89% gnome-shell libgobject-2.0.so.0.5600.4 [.] g_object_unref 0.87% kswapd0 [kernel.kallsyms] [k] page_referenced_one 0.86% gnome-shell libc-2.28.so [.] memmove_avx_unaligned_erms 0.83% Xorg [kernel.kallsyms] [k] alloc_vmap_area 0.63% gnome-shell libglib-2.0.so.0.5600.4 [.] g_slice_alloc 0.53% gnome-shell libgirepository-1.0.so.1.0.0 [.] g_base_info_unref 0.53% gnome-shell ld-2.28.so [.] _dl_find_dso_for_object 0.49% kswapd0 [kernel.kallsyms] [k] vma_interval_tree_iter_next 0.48% gnome-shell libpthread-2.28.so [.] pthread_getspecific 0.47% gnome-shell libgirepository-1.0.so.1.0.0 [.] 0x0000000000013b1d 0.45% gnome-shell libglib-2.0.so.0.5600.4 [.] g_slice_free1 0.45% gnome-shell libgobject-2.0.so.0.5600.4 [.] g_type_check_instance_is_fundamentally_a 0.44% gnome-shell libc-2.28.so [.] malloc 0.41% swapper [kernel.kallsyms] [k] apic_timer_interrupt 0.40% gnome-shell ld-2.28.so [.] _dl_lookup_symbol_x 0.39% kswapd0 [kernel.kallsyms] [k] raw_callee_save___pv_queued_spin_unlock"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/monitoring-application-performance-with-perf_monitoring-and-managing-system-status-and-performance |
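Combining the steps in this chapter, a typical session first records call-graph data from a running process and then inspects it. This is a sketch only: myapp is a placeholder process name, the DWARF unwinding method is chosen as an example, and pgrep -d, is used to join multiple PIDs with commas in the ID1,ID2 form shown above.
perf record --call-graph dwarf -p $(pgrep -d, myapp) sleep 30
perf report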
Chapter 14. Using Generic JMS | Chapter 14. Using Generic JMS Abstract Apache CXF provides a generic implementation of a JMS transport. The generic JMS transport is not restricted to using SOAP messages and allows for connecting to any application that uses JMS. NOTE : Support for the JMS 1.0.2 APIs has been removed in CXF 3.0. If you are using RedHat JBoss Fuse 6.2 or higher (includes CXF 3.0), your JMS provider must support the JMS 1.1 APIs. 14.1. Approaches to Configuring JMS The Apache CXF generic JMS transport can connect to any JMS provider and work with applications that exchange JMS messages with bodies of either TextMessage or ByteMessage . There are two ways to enable and configure the JMS transport: Section 14.2, "Using the JMS configuration bean" Section 14.5, "Using WSDL to configure JMS" 14.2. Using the JMS configuration bean Overview To simplify JMS configuration and make it more powerful, Apache CXF uses a single JMS configuration bean to configure JMS endpoints. The bean is implemented by the org.apache.cxf.transport.jms.JMSConfiguration class. It can be used to either configure endpoint's directly or to configure the JMS conduits and destinations. Configuration namespace The JMS configuration bean uses the Spring p-namespace to make the configuration as simple as possible. To use this namespace you need to declare it in the configuration's root element as shown in Example 14.1, "Declaring the Spring p-namespace" . Example 14.1. Declaring the Spring p-namespace Specifying the configuration You specify the JMS configuration by defining a bean of class org.apache.cxf.transport.jms.JMSConfiguration . The properties of the bean provide the configuration settings for the transport. Important In CXF 3.0, the JMS transport no longer has a dependency on Spring JMS, so some Spring JMS-related options have been removed. Table 14.1, "General JMS Configuration Properties" lists properties that are common to both providers and consumers. Table 14.1. General JMS Configuration Properties Property Default Description connectionFactory [Required] Specifies a reference to a bean that defines a JMS ConnectionFactory. wrapInSingleConnectionFactory true [pre v3.0] Removed in CXF 3.0 pre CXF 3.0 Specifies whether to wrap the ConnectionFactory with a Spring SingleConnectionFactory . Enable this property when using a ConnectionFactory that does not pool connections, as it will improve the performance of the JMS transport. This is so because the JMS transport creates a new connection for each message, and the SingleConnectionFactory is needed to cache the connection, so it can be reused. reconnectOnException false Deprecated in CXF 3.0 CXF always reconnects when an exception occurs. pre CXF 3.0 Specifies whether to create a new connection when an exception occurs. When wrapping the ConnectionFactory with a Spring SingleConnectionFactory : true - on an exception, create a new connection Do not enable this option when using a PooledConnectionFactory, as this option only returns the pooled connection, but does not reconnect. false - on an exception, do not try to reconnect targetDestination Specifies the JNDI name or provider-specific name of a destination. replyDestination Specifies the JMS name of the JMS destination where replies are sent. This property allows the use of a user-defined destination for replies. For more details see Section 14.6, "Using a Named Reply Destination" . destinationResolver DynamicDestinationResolver Specifies a reference to a Spring DestinationResolver . 
This property allows you to define how destination names are resolved to JMS destinations. Valid values are: DynamicDestinationResolver - resolve destination names using the features of the JMS provider. JndiDestinationResolver - resolve destination names using JNDI. transactionManager Specifies a reference to a Spring transaction manager. This enables the service to participate in JTA transactions. taskExecutor SimpleAsyncTaskExecutor Removed in CXF 3.0 pre CXF 3.0 Specifies a reference to a Spring TaskExecutor. This is used in listeners to decide how to handle incoming messages. useJms11 false Removed in CXF 3.0 CXF 3.0 supports JMS 1.1 features only. pre CXF 3.0 Specifies whether JMS 1.1 features are used. Valid values are: true - JMS 1.1 features false - JMS 1.0.2 features messageIdEnabled true Removed in CXF 3.0 pre CXF 3.0 Specifies whether the JMS transport wants the JMS broker to provide message IDs. Valid values are: true - broker needs to provide message IDs false - broker need not provide message IDs In this case, the endpoint calls its message producer's setDisableMessageID() method with a value of true . The broker is then given a hint that it need not generate message IDs or add them to the endpoint's messages. The broker either accepts the hint or ignores it. messageTimestampEnabled true Removed in CXF 3.0 pre CXF 3.0 Specifies whether the JMS transport wants the JMS broker to provide message time stamps. Valid values are: true - broker needs to provide message timestamps false - broker need not provide message timestamps In this case, the endpoint calls its message producer's setDisableMessageTimestamp() method with a value of true . The broker is then given a hint that it need not generate time stamps or add them to the endpoint's messages. The broker either accepts the hint or ignores it. cacheLevel -1 (feature disabled) Removed in CXF 3.0 pre CXF 3.0 Specifies the level of caching that the JMS listener container may apply. Valid values are: 0 - CACHE_NONE 1 - CACHE_CONNECTION 2 - CACHE_SESSION 3 - CACHE_CONSUMER 4 - CACHE_AUTO For details, see Class DefaultMessageListenerContainer pubSubNoLocal false Specifies whether to receive your own messages when using topics. true - do not receive your own messages false - receive your own messages receiveTimeout 60000 Specifies the time, in milliseconds, to wait for response messages. explicitQosEnabled false Specifies whether the QoS settings (such as priority, persistence, time to live) are explicitly set for each message ( true ) or use the default values ( false ). deliveryMode 2 Specifies whether a message is persistent. Valid values are: 1 (NON_PERSISTENT)-messages are kept memory only 2 (PERSISTENT)-messages are persisted to disk priority 4 Specifies message priority. JMS priority values range from 0 (lowest) to 9 (highest). See your JMS provider's documentation for details. timeToLive 0 (indefinitely) Specifies the time, in milliseconds, before a message that has been sent is discarded. sessionTransacted false Specifies whether JMS transactions are used. concurrentConsumers 1 Removed in CXF 3.0 pre CXF 3.0 Specifies the minimum number of concurrent consumers for the listener. maxConcurrentConsumers 1 Removed in CXF 3.0 pre CXF 3.0 Specifies the maximum number of concurrent consumers for the listener. messageSelector Specifies the string value of the selector used to filter incoming messages. This property enables multiple connections to share a queue. 
For more information on the syntax used to specify message selectors, see the JMS 1.1 specification . subscriptionDurable false Specifies whether the server uses durable subscriptions. durableSubscriptionName Specifies the name (string) used to register the durable subscription. messageType text Specifies how the message data will be packaged as a JMS message. Valid values are: text - specifies that the data will be packaged as a TextMessage byte - specifies that the data will be packaged as an array of bytes ( byte[] ) binary - specifies that the data will be packaged as an ByteMessage pubSubDomain false Specifies whether the target destination is a topic or a queue. Valid values are: true - topic false - queue jmsProviderTibcoEms false Specifies whether the JMS provider is Tibco EMS. When set to true , the principal in the security context is populated from the JMS_TIBCO_SENDER header. useMessageIDAsCorrelationID false Removed in CXF 3.0 Specifies whether JMS will use the message ID to correlate messages. When set to true , the client sets a generated correlation ID. maxSuspendedContinuations -1 (feature disabled) CXF 3.0 Specifies the maximum number of suspended continuations the JMS destination may have. When the current number exceeds the specified maximum, the JMSListenerContainer is stopped. reconnectPercentOfMax 70 CXF 3.0 Specifies when to restart the JMSListenerContainer stopped for exceeding maxSuspendedContinuations . The listener container is restarted when its current number of suspended continuations falls below the value of (maxSuspendedContinuations * reconnectPercentOfMax/100) . As shown in Example 14.2, "JMS configuration bean" , the bean's properties are specified as attributes to the bean element. They are all declared in the Spring p namespace. Example 14.2. JMS configuration bean Applying the configuration to an endpoint The JMSConfiguration bean can be applied directly to both server and client endpoints using the Apache CXF features mechanism. To do so: Set the endpoint's address attribute to jms:// . Add a jaxws:feature element to the endpoint's configuration. Add a bean of type org.apache.cxf.transport.jms.JMSConfigFeature to the feature. Set the bean element's p:jmsConfig-ref attribute to the ID of the JMSConfiguration bean. Example 14.3, "Adding JMS configuration to a JAX-WS client" shows a JAX-WS client that uses the JMS configuration from Example 14.2, "JMS configuration bean" . Example 14.3. Adding JMS configuration to a JAX-WS client Applying the configuration to the transport The JMSConfiguration bean can be applied to JMS conduits and JMS destinations using the jms:jmsConfig-ref element. The jms:jmsConfig-ref element's value is the ID of the JMSConfiguration bean. Example 14.4, "Adding JMS configuration to a JMS conduit" shows a JMS conduit that uses the JMS configuration from Example 14.2, "JMS configuration bean" . Example 14.4. Adding JMS configuration to a JMS conduit 14.3. Optimizing Client-Side JMS Performance Overview Two major settings affect the JMS performance of clients: pooling and synchronous receives. Pooling On the client side, CXF creates a new JMS session and JMS producer for each message. This is so because neither session nor producer objects are thread safe. Creating a producer is especially time intensive because it requires communicating with the server. Pooling connection factories improves performance by caching the connection, session, and producer. 
For ActiveMQ, configuring pooling is simple; for example: For more information on pooling, see "Appendix A Optimizing Performance of JMS Single- and Multiple-Resource Transactions" in the Red Hat JBoss Fuse Transaction Guide Avoiding synchronous receives For request/reply exchanges, the JMS transport sends a request and then waits for a reply. Whenever possible, request/reply messaging is implemented asynchronously using a JMS MessageListener . However, CXF must use a synchronous Consumer.receive() method when it needs to share queues between endpoints. This scenario requires the MessageListener to use a message selector to filter the messages. The message selector must be known in advance, so the MessageListener is opened only once. Two cases in which the message selector cannot be known in advance should be avoided: When JMSMessageID is used as the JMSCorrelationID If the JMS properties useConduitIdSelector and conduitSelectorPrefix are not set on the JMS transport, the client does not set a JMSCorrelationId . This causes the server to use the JMSMessageId of the request message as the JMSCorrelationId . As JMSMessageID cannot be known in advance, the client has to use a synchronous Consumer.receive() method. Note that you must use the Consumer.receive() method with IBM JMS endpoints (their default). The user sets the JMStype in the request message and then sets a custom JMSCorrelationID . Again, as the custom JMSCorrelationID cannot be known in advance, the client has to use a synchronous Consumer.receive() method. So the general rule is to avoid using settings that require using a synchronous receive. 14.4. Configuring JMS Transactions Overview CXF 3.0 supports both local JMS transactions and JTA transactions on CXF endpoints, when using one-way messaging. Local transactions Transactions using local resources roll back the JMS message only when an exception occurs. They do not directly coordinate other resources, such as database transactions. To set up a local transaction, configure the endpoint as you normally would, and set the property sessionTrasnsacted to true . Note For more information on transactions and pooling, see the Red Hat JBoss Fuse Transaction Guide . JTA transactions Using JTA transactions, you can coordinate any number of XA resources. If a CXF endpoint is configured for JTA transactions, it starts a transaction before calling the service implementation. The transaction will be committed if no exception occurs. Otherwise, it will be rolled back. In JTA transactions, a JMS message is consumed and the data written to a database. When an exception occurs, both resources are rolled back, so either the message is consumed and the data is written to the database, or the message is rolled back and the data is not written to the database. Configuring JTA transactions requires two steps: Defining a transaction manager bean method Define a transaction manager Set the name of the transaction manager in the JMS URI This example finds a bean with the ID TransactionManager . OSGi reference method Look up the transaction manager as an OSGi service using Blueprint Set the name of the transaction manager in the JMS URI This example looks up the transaction manager in JNDI. Configuring a JCA pooled connection factory Using Spring to define the JCA pooled connection factory: In this example, the first bean defines an ActiveMQ XA connection factory, which is given to a JcaPooledConnectionFactory . 
The JcaPooledConnectionFactory is then provided as the default bean with id ConnectionFactory . Note that the JcaPooledConnectionFactory looks like a normal ConnectionFactory. But when a new connection and session are opened, it checks for an XA transaction and, if found, automatically registers the JMS session as an XA resource. This allows the JMS session to participate in the JMS transaction. Important Directly setting an XA ConnectionFactory on the JMS transport will not work! 14.5. Using WSDL to configure JMS 14.5.1. JMS WSDL Extension Namespance The WSDL extensions for defining a JMS endpoint are defined in the namespace http://cxf.apache.org/transports/jms . In order to use the JMS extensions you will need to add the line shown in Example 14.5, "JMS WSDL extension namespace" to the definitions element of your contract. Example 14.5. JMS WSDL extension namespace 14.5.2. Basic JMS configuration Overview The JMS address information is provided using the jms:address element and its child, the jms:JMSNamingProperties element. The jms:address element's attributes specify the information needed to identify the JMS broker and the destination. The jms:JMSNamingProperties element specifies the Java properties used to connect to the JNDI service. Important Information specified using the JMS feature will override the information in the endpoint's WSDL file. Specifying the JMS address The basic configuration for a JMS endpoint is done by using a jms:address element as the child of your service's port element. The jms:address element used in WSDL is identical to the one used in the configuration file. Its attributes are listed in Table 14.2, "JMS endpoint attributes" . Table 14.2. JMS endpoint attributes Attribute Description destinationStyle Specifies if the JMS destination is a JMS queue or a JMS topic. jndiConnectionFactoryName Specifies the JNDI name bound to the JMS connection factory to use when connecting to the JMS destination. jmsDestinationName Specifies the JMS name of the JMS destination to which requests are sent. jmsReplyDestinationName Specifies the JMS name of the JMS destinations where replies are sent. This attribute allows you to use a user defined destination for replies. For more details see Section 14.6, "Using a Named Reply Destination" . jndiDestinationName Specifies the JNDI name bound to the JMS destination to which requests are sent. jndiReplyDestinationName Specifies the JNDI name bound to the JMS destinations where replies are sent. This attribute allows you to use a user defined destination for replies. For more details see Section 14.6, "Using a Named Reply Destination" . connectionUserName Specifies the user name to use when connecting to a JMS broker. connectionPassword Specifies the password to use when connecting to a JMS broker. The jms:address WSDL element uses a jms:JMSNamingProperties child element to specify additional information needed to connect to a JNDI provider. Specifying JNDI properties To increase interoperability with JMS and JNDI providers, the jms:address element has a child element, jms:JMSNamingProperties , that allows you to specify the values used to populate the properties used when connecting to the JNDI provider. The jms:JMSNamingProperties element has two attributes: name and value . name specifies the name of the property to set. value attribute specifies the value for the specified property. jms:JMSNamingProperties element can also be used for specification of provider specific properties. 
The following is a list of common JNDI properties that can be set: java.naming.factory.initial java.naming.provider.url java.naming.factory.object java.naming.factory.state java.naming.factory.url.pkgs java.naming.dns.url java.naming.authoritative java.naming.batchsize java.naming.referral java.naming.security.protocol java.naming.security.authentication java.naming.security.principal java.naming.security.credentials java.naming.language java.naming.applet For more details on what information to use in these attributes, check your JNDI provider's documentation and consult the Java API reference material. Example Example 14.6, "JMS WSDL port specification" shows an example of a JMS WSDL port specification. Example 14.6. JMS WSDL port specification 14.5.3. JMS client configuration Overview JMS consumer endpoints specify the type of messages they use. JMS consumer endpoint can use either a JMS ByteMessage or a JMS TextMessage . When using an ByteMessage the consumer endpoint uses a byte[] as the method for storing data into and retrieving data from the JMS message body. When messages are sent, the message data, including any formating information, is packaged into a byte[] and placed into the message body before it is placed on the wire. When messages are received, the consumer endpoint will attempt to unmarshall the data stored in the message body as if it were packed in a byte[] . When using a TextMessage , the consumer endpoint uses a string as the method for storing and retrieving data from the message body. When messages are sent, the message information, including any format-specific information, is converted into a string and placed into the JMS message body. When messages are received the consumer endpoint will attempt to unmarshall the data stored in the JMS message body as if it were packed into a string. When native JMS applications interact with Apache CXF consumers, the JMS application is responsible for interpreting the message and the formatting information. For example, if the Apache CXF contract specifies that the binding used for a JMS endpoint is SOAP, and the messages are packaged as TextMessage , the receiving JMS application will get a text message containing all of the SOAP envelope information. Specifying the message type The type of messages accepted by a JMS consumer endpoint is configured using the optional jms:client element. The jms:client element is a child of the WSDL port element and has one attribute: Table 14.3. JMS Client WSDL Extensions messageType Specifies how the message data will be packaged as a JMS message. text specifies that the data will be packaged as a TextMessage . binary specifies that the data will be packaged as an ByteMessage . Example Example 14.7, "WSDL for a JMS consumer endpoint" shows the WSDL for configuring a JMS consumer endpoint. Example 14.7. WSDL for a JMS consumer endpoint 14.5.4. JMS provider configuration Overview JMS provider endpoints have a number of behaviors that are configurable. These include: how messages are correlated the use of durable subscriptions if the service uses local JMS transactions the message selectors used by the endpoint Specifying the configuration Provider endpoint behaviors are configured using the optional jms:server element. The jms:server element is a child of the WSDL wsdl:port element and has the following attributes: Table 14.4. JMS provider endpoint WSDL extensions Attribute Description useMessageIDAsCorrealationID Specifies whether JMS will use the message ID to correlate messages. 
The default is false . durableSubscriberName Specifies the name used to register a durable subscription. messageSelector Specifies the string value of a message selector to use. For more information on the syntax used to specify message selectors, see the JMS 1.1 specification. transactional Specifies whether the local JMS broker will create transactions around message processing. The default is false . [a] [a] Currently, setting the transactional attribute to true is not supported by the runtime. Example Example 14.8, "WSDL for a JMS provider endpoint" shows the WSDL for configuring a JMS provider endpoint. Example 14.8. WSDL for a JMS provider endpoint 14.6. Using a Named Reply Destination Overview By default, Apache CXF endpoints using JMS create a temporary queue for sending replies back and forth. If you prefer to use named queues, you can configure the queue used to send replies as part of an endpoint's JMS configuration. Setting the reply destination name You specify the reply destination using either the jmsReplyDestinationName attribute or the jndiReplyDestinationName attribute in the endpoint's JMS configuration. A client endpoint will listen for replies on the specified destination and it will specify the value of the attribute in the ReplyTo field of all outgoing requests. A service endpoint will use the value of the jndiReplyDestinationName attribute as the location for placing replies if there is no destination specified in the request's ReplyTo field. Example Example 14.9, "JMS Consumer Specification Using a Named Reply Queue" shows the configuration for a JMS client endpoint. Example 14.9. JMS Consumer Specification Using a Named Reply Queue | [
"<beans xmlns:p=\"http://www.springframework.org/schema/p\" ... > </beans>",
"<bean id=\"jmsConfig\" class=\"org.apache.cxf.transport.jms.JMSConfiguration\" p:connectionFactory=\"jmsConnectionFactory\" p:targetDestination=\"dynamicQueues/greeter.request.queue\" p:pubSubDomain=\"false\" />",
"<jaxws:client id=\"CustomerService\" xmlns:customer=\"http://customerservice.example.com/\" serviceName=\"customer:CustomerServiceService\" endpointName=\"customer:CustomerServiceEndpoint\" address=\"jms://\" serviceClass=\"com.example.customerservice.CustomerService\"> <jaxws:features> <bean xmlns=\"http://www.springframework.org/schema/beans\" class=\"org.apache.cxf.transport.jms.JMSConfigFeature\" p:jmsConfig-ref=\"jmsConfig\"/> </jaxws:features> </jaxws:client>",
"<jms:conduit name=\"{http://cxf.apache.org/jms_conf_test}HelloWorldQueueBinMsgPort.jms-conduit\"> <jms:jmsConfig-ref>jmsConf</jms:jmsConfig-ref> </jms:conduit>",
"import org.apache.activemq.ActiveMQConnectionFactory; import org.apache.activemq.pool.PooledConnectionFactory; ConnectionFactory cf = new ActiveMQConnectionFactory(\"tcp://localhost:61616\"); PooledConnectionFactory pcf = new PooledConnectionFactory(); //Set expiry timeout because the default (0) prevents reconnection on failure pcf.setExpiryTimeout(5000); pcf.setConnectionFactory(cf); JMSConfiguration jmsConfig = new JMSConfiguration(); jmsConfig.setConnectionFactory(pdf);",
"<bean id=\"transactionManager\" class=\"org.apache.geronimo.transaction.manager.GeronimoTransactionManager\"/>",
"jms:queue:myqueue?jndiTransactionManager=TransactionManager",
"<reference id=\"TransactionManager\" interface=\"javax.transaction.TransactionManager\"/>",
"jms:jndi:myqueue?jndiTransactionManager=java:comp/env/TransactionManager",
"<bean id=\"xacf\" class=\"org.apache.activemq.ActiveMQXAConnectionFactory\"> <property name=\"brokerURL\" value=\"tcp://localhost:61616\" /> </bean> <bean id=\"ConnectionFactory\" class=\"org.apache.activemq.jms.pool.JcaPooledConnectionFactory\"> <property name=\"transactionManager\" ref=\"transactionManager\" /> <property name=\"connectionFactory\" ref=\"xacf\" /> </bean>",
"xmlns:jms=\"http://cxf.apache.org/transports/jms\"",
"<service name=\"JMSService\"> <port binding=\"tns:Greeter_SOAPBinding\" name=\"SoapPort\"> <jms:address jndiConnectionFactoryName=\"ConnectionFactory\" jndiDestinationName=\"dynamicQueues/test.Celtix.jmstransport\" > <jms:JMSNamingProperty name=\"java.naming.factory.initial\" value=\"org.activemq.jndi.ActiveMQInitialContextFactory\" /> <jms:JMSNamingProperty name=\"java.naming.provider.url\" value=\"tcp://localhost:61616\" /> </jms:address> </port> </service>",
"<service name=\"JMSService\"> <port binding=\"tns:Greeter_SOAPBinding\" name=\"SoapPort\"> <jms:address jndiConnectionFactoryName=\"ConnectionFactory\" jndiDestinationName=\"dynamicQueues/test.Celtix.jmstransport\" > <jms:JMSNamingProperty name=\"java.naming.factory.initial\" value=\"org.activemq.jndi.ActiveMQInitialContextFactory\" /> <jms:JMSNamingProperty name=\"java.naming.provider.url\" value=\"tcp://localhost:61616\" /> </jms:address> <jms:client messageType=\"binary\" /> </port> </service>",
"<service name=\"JMSService\"> <port binding=\"tns:Greeter_SOAPBinding\" name=\"SoapPort\"> <jms:address jndiConnectionFactoryName=\"ConnectionFactory\" jndiDestinationName=\"dynamicQueues/test.Celtix.jmstransport\" > <jms:JMSNamingProperty name=\"java.naming.factory.initial\" value=\"org.activemq.jndi.ActiveMQInitialContextFactory\" /> <jms:JMSNamingProperty name=\"java.naming.provider.url\" value=\"tcp://localhost:61616\" /> </jms:address> <jms:server messageSelector=\"cxf_message_selector\" useMessageIDAsCorrelationID=\"true\" transactional=\"true\" durableSubscriberName=\"cxf_subscriber\" /> </port> </service>",
"<jms:conduit name=\"{http://cxf.apache.org/jms_endpt}HelloWorldJMSPort.jms-conduit\"> <jms:address destinationStyle=\"queue\" jndiConnectionFactoryName=\"myConnectionFactory\" jndiDestinationName=\"myDestination\" jndiReplyDestinationName=\"myReplyDestination\" > <jms:JMSNamingProperty name=\"java.naming.factory.initial\" value=\"org.apache.cxf.transport.jms.MyInitialContextFactory\" /> <jms:JMSNamingProperty name=\"java.naming.provider.url\" value=\"tcp://localhost:61616\" /> </jms:address> </jms:conduit>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/FUSECXFJMS |
Chapter 3. Automation mesh design patterns | Chapter 3. Automation mesh design patterns The automation mesh topologies in this section provide examples you can use to design a mesh deployment in your environment. Examples range from a simple, hydrid node deployment to a complex pattern that deploys numerous automation controller instances, employing several execution and hop nodes. Prerequisites You reviewed conceptual information on node types and relationships Note The following examples include images that illustrate the mesh topology. The arrows in the images indicate the direction of peering. After peering is established, the connection between the nodes allows bidirectional communication. 3.1. Multiple hybrid nodes inventory file example This example inventory file deploys a control plane consisting of multiple hybrid nodes. The nodes in the control plane are automatically peered to one another. [automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com The following image displays the topology of this mesh network. The default node_type for nodes in the control plane is hybrid . You can explicitly set the node_type of individual nodes to hybrid in the [automationcontroller group] : [automationcontroller] aap_c_1.example.com node_type=hybrid aap_c_2.example.com node_type=hybrid aap_c_3.example.com node_type=hybrid Alternatively, you can set the node-type of all nodes in the [automationcontroller] group. When you add new nodes to the control plane they are automatically set to hybrid nodes. [automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [automationcontroller:vars] node_type=hybrid If you think that you might add control nodes to your control plane in future, it is better to define a separate group for the hybrid nodes, and set the node-type for the group: [automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [hybrid_group] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [hybrid_group:vars] node_type=hybrid 3.2. Single node control plane with single execution node This example inventory file deploys a single-node control plane and establishes a peer relationship to an execution node. [automationcontroller] aap_c_1.example.com [automationcontroller:vars] node_type=control peers=execution_nodes [execution_nodes] aap_e_1.example.com The following image displays the topology of this mesh network. The [automationcontroller] stanza defines the control nodes. If you add a new node to the automationcontroller group, it will automatically peer with the aap_c_1.example.com node. The [automationcontroller:vars] stanza sets the node type to control for all nodes in the control plane and defines how the nodes peer to the execution nodes: If you add a new node to the execution_nodes group, the control plane nodes automatically peer to it. If you add a new node to the automationcontroller group, the node type is set to control . The [execution_nodes] stanza lists all the execution and hop nodes in the inventory. The default node type is execution . You can specify the node type for an individual node: [execution_nodes] aap_e_1.example.com node_type=execution Alternatively, you can set the node_type of all execution nodes in the [execution_nodes] group. When you add new nodes to the group, they are automatically set to execution nodes. 
[execution_nodes] aap_e_1.example.com [execution_nodes:vars] node_type=execution If you plan to add hop nodes to your inventory in the future, it is better to define a separate group for the execution nodes, and set the node_type for the group: [execution_nodes] aap_e_1.example.com [local_execution_group] aap_e_1.example.com [local_execution_group:vars] node_type=execution 3.3. Minimum resilient configuration This example inventory file deploys a control plane consisting of two control nodes and two execution nodes. All nodes in the control plane are automatically peered to one another. All nodes in the control plane are peered with all nodes in the execution_nodes group. This configuration is resilient because the execution nodes are reachable from all control nodes. The capacity algorithm determines which control node is chosen when a job is launched. Refer to Automation controller Capacity Determination and Job Impact in the Automation Controller User Guide for more information. The following inventory file defines this configuration. [automationcontroller] aap_c_1.example.com aap_c_2.example.com [automationcontroller:vars] node_type=control peers=execution_nodes [execution_nodes] aap_e_1.example.com aap_e_2.example.com The [automationcontroller] stanza defines the control nodes. All nodes in the control plane are peered to one another. If you add a new node to the automationcontroller group, it will automatically peer with the original nodes. The [automationcontroller:vars] stanza sets the node type to control for all nodes in the control plane and defines how the nodes peer to the execution nodes: If you add a new node to the execution_nodes group, the control plane nodes automatically peer to it. If you add a new node to the automationcontroller group, the node type is set to control. The following image displays the topology of this mesh network. 3.4. Segregated local and remote execution configuration This configuration adds a hop node and a remote execution node to the resilient configuration. The remote execution node is reachable from the hop node. You can use this setup if you are setting up execution nodes in a remote location, or if you need to run automation in a DMZ network. [automationcontroller] aap_c_1.example.com aap_c_2.example.com [automationcontroller:vars] node_type=control peers=instance_group_local [execution_nodes] aap_e_1.example.com aap_e_2.example.com aap_h_1.example.com aap_e_3.example.com [instance_group_local] aap_e_1.example.com aap_e_2.example.com [hop] aap_h_1.example.com [hop:vars] peers=automationcontroller [instance_group_remote] aap_e_3.example.com [instance_group_remote:vars] peers=hop The following image displays the topology of this mesh network. The [automationcontroller:vars] stanza sets the node types for all nodes in the control plane and defines how the control nodes peer to the local execution nodes: All nodes in the control plane are automatically peered to one another. All nodes in the control plane are peered with all local execution nodes. If the name of a group of nodes begins with instance_group_, the installer recognizes it as an instance group and adds it to the Ansible Automation Platform user interface. 3.5. Multi-hopped execution node In this configuration, resilient controller nodes are peered with resilient local execution nodes. Resilient local hop nodes are peered with the controller nodes. A remote execution node and a remote hop node are peered with the local hop nodes.
You can use this setup if you need to run automation in a DMZ network from a remote network. [automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [automationcontroller:vars] node_type=control peers=instance_group_local [execution_nodes] aap_e_1.example.com aap_e_2.example.com aap_e_3.example.com aap_e_4.example.com aap_h_1.example.com node_type=hop aap_h_2.example.com node_type=hop aap_h_3.example.com node_type=hop [instance_group_local] aap_e_1.example.com aap_e_2.example.com [instance_group_remote] aap_e_3.example.com [instance_group_remote:vars] peers=local_hop [instance_group_multi_hop_remote] aap_e_4.example.com [instance_group_multi_hop_remote:vars] peers=remote_multi_hop [local_hop] aap_h_1.example.com aap_h_2.example.com [local_hop:vars] peers=automationcontroller [remote_multi_hop] aap_h_3 peers=local_hop The following image displays the topology of this mesh network. The [automationcontroller:vars] stanza sets the node types for all nodes in the control plane and defines how the control nodes peer to the local execution nodes: All nodes in the control plane are automatically peered to one another. All nodes in the control plane are peered with all local execution nodes. The [local_hop:vars] stanza peers all nodes in the [local_hop] group with all the control nodes. If the name of a group of nodes begins with instance_group_, the installer recognizes it as an instance group and adds it to the Ansible Automation Platform user interface. 3.6. Outbound only connections to controller nodes This example inventory file deploys a control plane consisting of two control nodes and several execution nodes. Only outbound connections are allowed to the controller nodes. All nodes in the 'execution_nodes' group are peered with all nodes in the control plane. [automationcontroller] controller-[1:2].example.com [execution_nodes] execution-[1:5].example.com [execution_nodes:vars] # connection is established *from* the execution nodes *to* the automationcontroller peers=automationcontroller The following image displays the topology of this mesh network. | [
"[automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com",
"[automationcontroller] aap_c_1.example.com node_type=hybrid aap_c_2.example.com node_type=hybrid aap_c_3.example.com node_type=hybrid",
"[automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [automationcontroller:vars] node_type=hybrid",
"[automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [hybrid_group] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [hybrid_group:vars] node_type=hybrid",
"[automationcontroller] aap_c_1.example.com [automationcontroller:vars] node_type=control peers=execution_nodes [execution_nodes] aap_e_1.example.com",
"[execution_nodes] aap_e_1.example.com node_type=execution",
"[execution_nodes] aap_e_1.example.com [execution_nodes:vars] node_type=execution",
"[execution_nodes] aap_e_1.example.com [local_execution_group] aap_e_1.example.com [local_execution_group:vars] node_type=execution",
"[automationcontroller] aap_c_1.example.com aap_c_2.example.com [automationcontroller:vars] node_type=control peers=execution_nodes [execution_nodes] aap_e_1.example.com aap_e_1.example.com",
"[automationcontroller] aap_c_1.example.com aap_c_2.example.com [automationcontroller:vars] node_type=control peers=instance_group_local [execution_nodes] aap_e_1.example.com aap_e_2.example.com aap_h_1.example.com aap_e_3.example.com [instance_group_local] aap_e_1.example.com aap_e_2.example.com [hop] aap_h_1.example.com [hop:vars] peers=automationcontroller [instance_group_remote] aap_e_3.example.com [instance_group_remote:vars] peers=hop",
"[automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [automationcontroller:vars] node_type=control peers=instance_group_local [execution_nodes] aap_e_1.example.com aap_e_2.example.com aap_e_3.example.com aap_e_4.example.com aap_h_1.example.com node_type=hop aap_h_2.example.com node_type=hop aap_h_3.example.com node_type=hop [instance_group_local] aap_e_1.example.com aap_e_2.example.com [instance_group_remote] aap_e_3.example.com [instance_group_remote:vars] peers=local_hop [instance_group_multi_hop_remote] aap_e_4.example.com [instance_group_multi_hop_remote:vars] peers=remote_multi_hop [local_hop] aap_h_1.example.com aap_h_2.example.com [local_hop:vars] peers=automationcontroller [remote_multi_hop] aap_h_3 peers=local_hop",
"[automationcontroller] controller-[1:2].example.com [execution_nodes] execution-[1:5].example.com connection is established *from* the execution nodes *to* the automationcontroller peers=automationcontroller"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_automation_mesh_guide/design-patterns |
2.6.2. Useful Websites | 2.6.2. Useful Websites http://people.redhat.com/alikins/system_tuning.html -- System Tuning Info for Linux Servers. A stream-of-consciousness approach to performance tuning and resource monitoring for servers. http://www.linuxjournal.com/article.php?sid=2396 -- Performance Monitoring Tools for Linux. This Linux Journal page is geared more toward the administrator interested in writing a customized performance graphing solution. Written several years ago, some of the details may no longer apply, but the overall concept and execution are sound. http://oprofile.sourceforge.net/ -- OProfile project website. Includes valuable OProfile resources, including pointers to mailing lists and the #oprofile IRC channel. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-resource-addres-web |
Chapter 22. KIE Server Java client API for KIE containers and business assets | Chapter 22. KIE Server Java client API for KIE containers and business assets Red Hat Decision Manager provides a KIE Server Java client API that enables you to connect to KIE Server using the REST protocol from your Java client application. You can use the KIE Server Java client API as an alternative to the KIE Server REST API to interact with your KIE containers and business assets (such as business rules, processes, and solvers) in Red Hat Decision Manager without using the Business Central user interface. This API support enables you to maintain your Red Hat Decision Manager resources more efficiently and optimize your integration and development with Red Hat Decision Manager. With the KIE Server Java client API, you can perform the following actions also supported by the KIE Server REST API: Deploy or dispose KIE containers Retrieve and update KIE container information Return KIE Server status and basic information Retrieve and update business asset information Execute business assets (such as rules and processes) KIE Server Java client API requests require the following components: Authentication The KIE Server Java client API requires HTTP Basic authentication for the user role kie-server. To view configured user roles for your Red Hat Decision Manager distribution, navigate to ~/$SERVER_HOME/standalone/configuration/application-roles.properties and ~/application-users.properties. To add a user with the kie-server role, navigate to ~/$SERVER_HOME/bin and run the following command: $ ./bin/jboss-cli.sh --commands="embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=['kie-server'])" For more information about user roles and Red Hat Decision Manager installation options, see Planning a Red Hat Decision Manager installation. Project dependencies The KIE Server Java client API requires the following dependencies on the relevant classpath of your Java project: <!-- For remote execution on KIE Server --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>${rhpam.version}</version> </dependency> <!-- For runtime commands --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <scope>runtime</scope> <version>${rhpam.version}</version> </dependency> <!-- For debug logging (optional) --> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>${logback.version}</version> </dependency> The <version> for Red Hat Decision Manager dependencies is the Maven artifact version for Red Hat Decision Manager currently used in your project (for example, 7.67.0.Final-redhat-00024). Note Instead of specifying a Red Hat Decision Manager <version> for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project.
Example BOM dependency: <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency> For more information about the Red Hat Business Automation BOM, see What is the mapping between RHDM product and maven library version? . Client request configuration All Java client requests with the KIE Server Java client API must define at least the following server communication components: Credentials of the kie-server user KIE Server location, such as http://localhost:8080/kie-server/services/rest/server Marshalling format for API requests and responses (JSON, JAXB, or XSTREAM) A KieServicesConfiguration object and a KieServicesClient object, which serve as the entry point for starting the server communication using the Java client API A KieServicesFactory object defining REST protocol and user access Any other client services used, such as RuleServicesClient , ProcessServicesClient , or QueryServicesClient The following are examples of basic and advanced client configurations with these components: Basic client configuration example import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.client.KieServicesClient; import org.kie.server.client.KieServicesConfiguration; import org.kie.server.client.KieServicesFactory; public class MyConfigurationObject { private static final String URL = "http://localhost:8080/kie-server/services/rest/server"; private static final String USER = "baAdmin"; private static final String PASSWORD = "password@1"; private static final MarshallingFormat FORMAT = MarshallingFormat.JSON; private static KieServicesConfiguration conf; private static KieServicesClient kieServicesClient; public static void initialize() { conf = KieServicesFactory.newRestConfiguration(URL, USER, PASSWORD); //If you use custom classes, such as Obj.class, add them to the configuration. 
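/* Note added for clarity, not part of the original example: Obj.class stands in for any custom model class that the client sends to or receives from KIE Server. Registering such classes with addExtraClasses() lets the configured marshaller (JSON, JAXB, or XSTREAM) serialize and deserialize them on the client side. */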
Set<Class<?>> extraClassList = new HashSet<Class<?>>(); extraClassList.add(Obj.class); conf.addExtraClasses(extraClassList); conf.setMarshallingFormat(FORMAT); kieServicesClient = KieServicesFactory.newKieServicesClient(conf); } } Advanced client configuration example with additional client services import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.client.CaseServicesClient; import org.kie.server.client.DMNServicesClient; import org.kie.server.client.DocumentServicesClient; import org.kie.server.client.JobServicesClient; import org.kie.server.client.KieServicesClient; import org.kie.server.client.KieServicesConfiguration; import org.kie.server.client.KieServicesFactory; import org.kie.server.client.ProcessServicesClient; import org.kie.server.client.QueryServicesClient; import org.kie.server.client.RuleServicesClient; import org.kie.server.client.SolverServicesClient; import org.kie.server.client.UIServicesClient; import org.kie.server.client.UserTaskServicesClient; import org.kie.server.api.model.instance.ProcessInstance; import org.kie.server.api.model.KieContainerResource; import org.kie.server.api.model.ReleaseId; public class MyAdvancedConfigurationObject { // REST API base URL, credentials, and marshalling format private static final String URL = "http://localhost:8080/kie-server/services/rest/server"; private static final String USER = "baAdmin"; private static final String PASSWORD = "password@1";; private static final MarshallingFormat FORMAT = MarshallingFormat.JSON; private static KieServicesConfiguration conf; // KIE client for common operations private static KieServicesClient kieServicesClient; // Rules client private static RuleServicesClient ruleClient; // Process automation clients private static CaseServicesClient caseClient; private static DocumentServicesClient documentClient; private static JobServicesClient jobClient; private static ProcessServicesClient processClient; private static QueryServicesClient queryClient; private static UIServicesClient uiClient; private static UserTaskServicesClient userTaskClient; // DMN client private static DMNServicesClient dmnClient; // Planning client private static SolverServicesClient solverClient; public static void main(String[] args) { initializeKieServerClient(); initializeDroolsServiceClients(); initializeJbpmServiceClients(); initializeSolverServiceClients(); } public static void initializeKieServerClient() { conf = KieServicesFactory.newRestConfiguration(URL, USER, PASSWORD); conf.setMarshallingFormat(FORMAT); kieServicesClient = KieServicesFactory.newKieServicesClient(conf); } public static void initializeDroolsServiceClients() { ruleClient = kieServicesClient.getServicesClient(RuleServicesClient.class); dmnClient = kieServicesClient.getServicesClient(DMNServicesClient.class); } public static void initializeJbpmServiceClients() { caseClient = kieServicesClient.getServicesClient(CaseServicesClient.class); documentClient = kieServicesClient.getServicesClient(DocumentServicesClient.class); jobClient = kieServicesClient.getServicesClient(JobServicesClient.class); processClient = kieServicesClient.getServicesClient(ProcessServicesClient.class); queryClient = kieServicesClient.getServicesClient(QueryServicesClient.class); uiClient = kieServicesClient.getServicesClient(UIServicesClient.class); userTaskClient = kieServicesClient.getServicesClient(UserTaskServicesClient.class); } public static void initializeSolverServiceClients() { solverClient = 
kieServicesClient.getServicesClient(SolverServicesClient.class); } } 22.1. Sending requests with the KIE Server Java client API The KIE Server Java client API enables you to connect to KIE Server using the REST protocol from your Java client application. You can use the KIE Server Java client API as an alternative to the KIE Server REST API to interact with your KIE containers and business assets (such as business rules, processes, and solvers) in Red Hat Decision Manager without using the Business Central user interface. Prerequisites KIE Server is installed and running. You have kie-server user role access to KIE Server. You have a Java project with Red Hat Decision Manager resources. Procedure In your client application, ensure that the following dependencies have been added to the relevant classpath of your Java project: <!-- For remote execution on KIE Server --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>${rhpam.version}</version> </dependency> <!-- For runtime commands --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <scope>runtime</scope> <version>${rhpam.version}</version> </dependency> <!-- For debug logging (optional) --> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>${logback.version}</version> </dependency> Download the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-$VERSION/kie-server-parent/kie-server-remote/kie-server-client/src/main/java/org/kie/server/client to access the KIE Server Java clients. In the ~/kie/server/client folder, identify the relevant Java client for the request you want to send, such as KieServicesClient to access client services for KIE containers and other assets in KIE Server. In your client application, create a .java class for the API request. The class must contain the necessary imports, KIE Server location and user credentials, a KieServicesClient object, and the client method to execute, such as createContainer and disposeContainer from the KieServicesClient client. Adjust any configuration details according to your use case.
Creating and disposing a container import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.client.KieServicesClient; import org.kie.server.client.KieServicesConfiguration; import org.kie.server.client.KieServicesFactory; import org.kie.server.api.model.KieContainerResource; import org.kie.server.api.model.ServiceResponse; public class MyConfigurationObject { private static final String URL = "http://localhost:8080/kie-server/services/rest/server"; private static final String USER = "baAdmin"; private static final String PASSWORD = "password@1"; private static final MarshallingFormat FORMAT = MarshallingFormat.JSON; private static KieServicesConfiguration conf; private static KieServicesClient kieServicesClient; public static void initialize() { conf = KieServicesFactory.newRestConfiguration(URL, USER, PASSWORD); conf.setMarshallingFormat(FORMAT); kieServicesClient = KieServicesFactory.newKieServicesClient(conf); } public void disposeAndCreateContainer() { System.out.println("== Disposing and creating containers =="); // Retrieve list of KIE containers List<KieContainerResource> kieContainers = kieServicesClient.listContainers().getResult().getContainers(); if (kieContainers.size() == 0) { System.out.println("No containers available..."); return; } // Dispose KIE container KieContainerResource container = kieContainers.get(0); String containerId = container.getContainerId(); ServiceResponse<Void> responseDispose = kieServicesClient.disposeContainer(containerId); if (responseDispose.getType() == ResponseType.FAILURE) { System.out.println("Error disposing " + containerId + ". Message: "); System.out.println(responseDispose.getMsg()); return; } System.out.println("Success Disposing container " + containerId); System.out.println("Trying to recreate the container..."); // Re-create KIE container ServiceResponse<KieContainerResource> createResponse = kieServicesClient.createContainer(containerId, container); if(createResponse.getType() == ResponseType.FAILURE) { System.out.println("Error creating " + containerId + ". Message: "); System.out.println(responseDispose.getMsg()); return; } System.out.println("Container recreated with success!"); } } You define service responses using the org.kie.server.api.model.ServiceResponse<T> object, where T represents the type of returned response. The ServiceResponse object has the following attributes: String message: Returns the response message ResponseType type: Returns either SUCCESS or FAILURE T result: Returns the requested object In this example, when you dispose a container, the ServiceResponse returns a Void response. When you create a container, the ServiceResponse returns a KieContainerResource object. Note A conversation between a client and a specific KIE Server container in a clustered environment is secured by a unique conversationID. The conversationID is transferred using the X-KIE-ConversationId REST header. If you update the container, unset the conversationID. Use KieServicesClient.completeConversation() to unset the conversationID for Java API. Run the configured .java class from your project directory to execute the request, and review the KIE Server response. If you enabled debug logging, KIE Server responds with a detailed response according to your configured marshalling format, such as JSON. Example server response for a new KIE container (log): If you encounter request errors, review the returned error code messages and adjust your Java configurations accordingly. 22.2.
Supported KIE Server Java clients The following are some of the Java client services available in the org.kie.server.client package of your Red Hat Decision Manager distribution. You can use these services to interact with related resources in KIE Server similarly to the KIE Server REST API. KieServicesClient: Used as the entry point for other KIE Server Java clients, and used to interact with KIE containers JobServicesClient: Used to schedule, cancel, re-queue, and get job requests RuleServicesClient: Used to send commands to the server to perform rule-related operations, such as executing rules or inserting objects into the KIE session SolverServicesClient: Used to perform all Red Hat build of OptaPlanner operations, such as getting the solver state and the best solution, or disposing a solver The getServicesClient method provides access to any of these clients: RuleServicesClient rulesClient = kieServicesClient.getServicesClient(RuleServicesClient.class); For the full list of available KIE Server Java clients, download the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-$VERSION/kie-server-parent/kie-server-remote/kie-server-client/src/main/java/org/kie/server/client. 22.3. Example requests with the KIE Server Java client API The following are examples of KIE Server Java client API requests for basic interactions with KIE Server. For the full list of available KIE Server Java clients, download the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-$VERSION/kie-server-parent/kie-server-remote/kie-server-client/src/main/java/org/kie/server/client. Listing KIE Server capabilities You can use the org.kie.server.api.model.KieServerInfo object to identify server capabilities. The KieServicesClient client requires the server capability information to correctly produce service clients. You can specify the capabilities globally in KieServicesConfiguration; otherwise they are automatically retrieved from KIE Server. Example request to return KIE Server capabilities public void listCapabilities() { KieServerInfo serverInfo = kieServicesClient.getServerInfo().getResult(); System.out.print("Server capabilities:"); for (String capability : serverInfo.getCapabilities()) { System.out.print(" " + capability); } System.out.println(); } Listing KIE containers in KIE Server KIE containers are represented by the org.kie.server.api.model.KieContainerResource object. The list of resources is represented by the org.kie.server.api.model.KieContainerResourceList object. Example request to return KIE containers from KIE Server public void listContainers() { KieContainerResourceList containersList = kieServicesClient.listContainers().getResult(); List<KieContainerResource> kieContainers = containersList.getContainers(); System.out.println("Available containers: "); for (KieContainerResource container : kieContainers) { System.out.println("\t" + container.getContainerId() + " (" + container.getReleaseId() + ")"); } } You can optionally filter the KIE container results using an instance of the org.kie.server.api.model.KieContainerResourceFilter class, which is passed to the org.kie.server.client.KieServicesClient.listContainers() method.
Example request to return KIE containers by release ID and status public void listContainersWithFilter() { // Filter containers by releaseId "org.example:container:1.0.0.Final" and status FAILED KieContainerResourceFilter filter = new KieContainerResourceFilter.Builder() .releaseId("org.example", "container", "1.0.0.Final") .status(KieContainerStatus.FAILED) .build(); // Using previously created KieServicesClient KieContainerResourceList containersList = kieServicesClient.listContainers(filter).getResult(); List<KieContainerResource> kieContainers = containersList.getContainers(); System.out.println("Available containers: "); for (KieContainerResource container : kieContainers) { System.out.println("\t" + container.getContainerId() + " (" + container.getReleaseId() + ")"); } } Creating and disposing KIE containers in KIE Server You can use the createContainer and disposeContainer methods in the KieServicesClient client to dispose and create KIE containers. In this example, when you dispose a container, the ServiceResponse returns a Void response. When you create a container, the ServiceResponse returns a KieContainerResource object. Example request to dispose and re-create a KIE container public void disposeAndCreateContainer() { System.out.println("== Disposing and creating containers =="); // Retrieve list of KIE containers List<KieContainerResource> kieContainers = kieServicesClient.listContainers().getResult().getContainers(); if (kieContainers.size() == 0) { System.out.println("No containers available..."); return; } // Dispose KIE container KieContainerResource container = kieContainers.get(0); String containerId = container.getContainerId(); ServiceResponse<Void> responseDispose = kieServicesClient.disposeContainer(containerId); if (responseDispose.getType() == ResponseType.FAILURE) { System.out.println("Error disposing " + containerId + ". Message: "); System.out.println(responseDispose.getMsg()); return; } System.out.println("Success Disposing container " + containerId); System.out.println("Trying to recreate the container..."); // Re-create KIE container ServiceResponse<KieContainerResource> createResponse = kieServicesClient.createContainer(containerId, container); if(createResponse.getType() == ResponseType.FAILURE) { System.out.println("Error creating " + containerId + ". Message: "); System.out.println(responseDispose.getMsg()); return; } System.out.println("Container recreated with success!"); } Executing runtime commands in KIE Server Red Hat Decision Manager supports runtime commands that you can send to KIE Server for asset-related operations, such as inserting or retracting objects in a KIE session or firing all rules. The full list of supported runtime commands is located in the org.drools.core.command.runtime package in your Red Hat Decision Manager instance. You can use the org.kie.api.command.KieCommands class to insert commands, and use org.kie.api.KieServices.get().getCommands() to instantiate the KieCommands class. If you want to add multiple commands, use the BatchExecutionCommand wrapper. Example request to insert an object and fire all rules import org.kie.api.command.Command; import org.kie.api.command.KieCommands; import org.kie.server.api.model.ServiceResponse; import org.kie.server.client.RuleServicesClient; import org.kie.server.client.KieServicesClient; import org.kie.api.KieServices; import java.util.Arrays; ... 
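/* Note added for clarity, not part of the original example: the insert and fireAllRules commands below are wrapped in a single BatchExecutionCommand so that both operations are sent to the KIE container's session in one request, and the combined result is returned in one ServiceResponse. */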
public void executeCommands() { String containerId = "hello"; System.out.println("== Sending commands to the server =="); RuleServicesClient rulesClient = kieServicesClient.getServicesClient(RuleServicesClient.class); KieCommands commandsFactory = KieServices.Factory.get().getCommands(); Command<?> insert = commandsFactory.newInsert("Some String OBJ"); Command<?> fireAllRules = commandsFactory.newFireAllRules(); Command<?> batchCommand = commandsFactory.newBatchExecution(Arrays.asList(insert, fireAllRules)); ServiceResponse<String> executeResponse = rulesClient.executeCommands(containerId, batchCommand); if(executeResponse.getType() == ResponseType.SUCCESS) { System.out.println("Commands executed with success! Response: "); System.out.println(executeResponse.getResult()); } else { System.out.println("Error executing rules. Message: "); System.out.println(executeResponse.getMsg()); } } Note A conversation between a client and a specific KIE Server container in a clustered environment is secured by a unique conversationID. The conversationID is transferred using the X-KIE-ConversationId REST header. If you update the container, unset the conversationID. Use KieServicesClient.completeConversation() to unset the conversationID for Java API. | [
"./bin/jboss-cli.sh --commands=\"embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=['kie-server'])\"",
"<!-- For remote execution on KIE Server --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- For runtime commands --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <scope>runtime</scope> <version>USD{rhpam.version}</version> </dependency> <!-- For debug logging (optional) --> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{logback.version}</version> </dependency>",
"<dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency>",
"import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.client.KieServicesClient; import org.kie.server.client.KieServicesConfiguration; import org.kie.server.client.KieServicesFactory; public class MyConfigurationObject { private static final String URL = \"http://localhost:8080/kie-server/services/rest/server\"; private static final String USER = \"baAdmin\"; private static final String PASSWORD = \"password@1\"; private static final MarshallingFormat FORMAT = MarshallingFormat.JSON; private static KieServicesConfiguration conf; private static KieServicesClient kieServicesClient; public static void initialize() { conf = KieServicesFactory.newRestConfiguration(URL, USER, PASSWORD); //If you use custom classes, such as Obj.class, add them to the configuration. Set<Class<?>> extraClassList = new HashSet<Class<?>>(); extraClassList.add(Obj.class); conf.addExtraClasses(extraClassList); conf.setMarshallingFormat(FORMAT); kieServicesClient = KieServicesFactory.newKieServicesClient(conf); } }",
"import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.client.CaseServicesClient; import org.kie.server.client.DMNServicesClient; import org.kie.server.client.DocumentServicesClient; import org.kie.server.client.JobServicesClient; import org.kie.server.client.KieServicesClient; import org.kie.server.client.KieServicesConfiguration; import org.kie.server.client.KieServicesFactory; import org.kie.server.client.ProcessServicesClient; import org.kie.server.client.QueryServicesClient; import org.kie.server.client.RuleServicesClient; import org.kie.server.client.SolverServicesClient; import org.kie.server.client.UIServicesClient; import org.kie.server.client.UserTaskServicesClient; import org.kie.server.api.model.instance.ProcessInstance; import org.kie.server.api.model.KieContainerResource; import org.kie.server.api.model.ReleaseId; public class MyAdvancedConfigurationObject { // REST API base URL, credentials, and marshalling format private static final String URL = \"http://localhost:8080/kie-server/services/rest/server\"; private static final String USER = \"baAdmin\"; private static final String PASSWORD = \"password@1\";; private static final MarshallingFormat FORMAT = MarshallingFormat.JSON; private static KieServicesConfiguration conf; // KIE client for common operations private static KieServicesClient kieServicesClient; // Rules client private static RuleServicesClient ruleClient; // Process automation clients private static CaseServicesClient caseClient; private static DocumentServicesClient documentClient; private static JobServicesClient jobClient; private static ProcessServicesClient processClient; private static QueryServicesClient queryClient; private static UIServicesClient uiClient; private static UserTaskServicesClient userTaskClient; // DMN client private static DMNServicesClient dmnClient; // Planning client private static SolverServicesClient solverClient; public static void main(String[] args) { initializeKieServerClient(); initializeDroolsServiceClients(); initializeJbpmServiceClients(); initializeSolverServiceClients(); } public static void initializeKieServerClient() { conf = KieServicesFactory.newRestConfiguration(URL, USER, PASSWORD); conf.setMarshallingFormat(FORMAT); kieServicesClient = KieServicesFactory.newKieServicesClient(conf); } public static void initializeDroolsServiceClients() { ruleClient = kieServicesClient.getServicesClient(RuleServicesClient.class); dmnClient = kieServicesClient.getServicesClient(DMNServicesClient.class); } public static void initializeJbpmServiceClients() { caseClient = kieServicesClient.getServicesClient(CaseServicesClient.class); documentClient = kieServicesClient.getServicesClient(DocumentServicesClient.class); jobClient = kieServicesClient.getServicesClient(JobServicesClient.class); processClient = kieServicesClient.getServicesClient(ProcessServicesClient.class); queryClient = kieServicesClient.getServicesClient(QueryServicesClient.class); uiClient = kieServicesClient.getServicesClient(UIServicesClient.class); userTaskClient = kieServicesClient.getServicesClient(UserTaskServicesClient.class); } public static void initializeSolverServiceClients() { solverClient = kieServicesClient.getServicesClient(SolverServicesClient.class); } }",
"<!-- For remote execution on KIE Server --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- For runtime commands --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> <scope>runtime</scope> <version>USD{rhpam.version}</version> </dependency> <!-- For debug logging (optional) --> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{logback.version}</version> </dependency>",
"import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.client.KieServicesClient; import org.kie.server.client.KieServicesConfiguration; import org.kie.server.client.KieServicesFactory; import org.kie.server.api.model.KieContainerResource; import org.kie.server.api.model.ServiceResponse; public class MyConfigurationObject { private static final String URL = \"http://localhost:8080/kie-server/services/rest/server\"; private static final String USER = \"baAdmin\"; private static final String PASSWORD = \"password@1\"; private static final MarshallingFormat FORMAT = MarshallingFormat.JSON; private static KieServicesConfiguration conf; private static KieServicesClient kieServicesClient; public static void initialize() { conf = KieServicesFactory.newRestConfiguration(URL, USER, PASSWORD); public void disposeAndCreateContainer() { System.out.println(\"== Disposing and creating containers ==\"); // Retrieve list of KIE containers List<KieContainerResource> kieContainers = kieServicesClient.listContainers().getResult().getContainers(); if (kieContainers.size() == 0) { System.out.println(\"No containers available...\"); return; } // Dispose KIE container KieContainerResource container = kieContainers.get(0); String containerId = container.getContainerId(); ServiceResponse<Void> responseDispose = kieServicesClient.disposeContainer(containerId); if (responseDispose.getType() == ResponseType.FAILURE) { System.out.println(\"Error disposing \" + containerId + \". Message: \"); System.out.println(responseDispose.getMsg()); return; } System.out.println(\"Success Disposing container \" + containerId); System.out.println(\"Trying to recreate the container...\"); // Re-create KIE container ServiceResponse<KieContainerResource> createResponse = kieServicesClient.createContainer(containerId, container); if(createResponse.getType() == ResponseType.FAILURE) { System.out.println(\"Error creating \" + containerId + \". Message: \"); System.out.println(responseDispose.getMsg()); return; } System.out.println(\"Container recreated with success!\"); } } }",
"10:23:35.194 [main] INFO o.k.s.a.m.MarshallerFactory - Marshaller extensions init 10:23:35.396 [main] DEBUG o.k.s.client.balancer.LoadBalancer - Load balancer RoundRobinBalancerStrategy{availableEndpoints=[http://localhost:8080/kie-server/services/rest/server]} selected url 'http://localhost:8080/kie-server/services/rest/server' 10:23:35.398 [main] DEBUG o.k.s.c.i.AbstractKieServicesClientImpl - About to send GET request to 'http://localhost:8080/kie-server/services/rest/server' 10:23:35.440 [main] DEBUG o.k.s.c.i.AbstractKieServicesClientImpl - About to deserialize content: '{ \"type\" : \"SUCCESS\", \"msg\" : \"Kie Server info\", \"result\" : { \"kie-server-info\" : { \"id\" : \"default-kieserver\", \"version\" : \"7.11.0.Final-redhat-00003\", \"name\" : \"default-kieserver\", \"location\" : \"http://localhost:8080/kie-server/services/rest/server\", \"capabilities\" : [ \"KieServer\", \"BRM\", \"BPM\", \"CaseMgmt\", \"BPM-UI\", \"BRP\", \"DMN\", \"Swagger\" ], \"messages\" : [ { \"severity\" : \"INFO\", \"timestamp\" : { \"java.util.Date\" : 1540814906533 }, \"content\" : [ \"Server KieServerInfo{serverId='default-kieserver', version='7.11.0.Final-redhat-00003', name='default-kieserver', location='http://localhost:8080/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger], messages=null}started successfully at Mon Oct 29 08:08:26 EDT 2018\" ] } ] } } }' into type: 'class org.kie.server.api.model.ServiceResponse' 10:23:35.653 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - KieServicesClient connected to: default-kieserver version 7.11.0.Final-redhat-00003 10:23:35.653 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Supported capabilities by the server: [KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger] 10:23:35.653 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Building services client for server capability KieServer 10:23:35.653 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - No builder found for 'KieServer' capability 10:23:35.654 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Building services client for server capability BRM 10:23:35.654 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Builder 'org.kie.server.client.helper.DroolsServicesClientBuilder@6b927fb' for capability 'BRM' 10:23:35.655 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Capability implemented by {interface org.kie.server.client.RuleServicesClient=org.kie.server.client.impl.RuleServicesClientImpl@4a94ee4} 10:23:35.655 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Building services client for server capability BPM 10:23:35.656 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Builder 'org.kie.server.client.helper.JBPMServicesClientBuilder@4cc451f2' for capability 'BPM' 10:23:35.672 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Capability implemented by {interface org.kie.server.client.JobServicesClient=org.kie.server.client.impl.JobServicesClientImpl@1189dd52, interface org.kie.server.client.admin.ProcessAdminServicesClient=org.kie.server.client.admin.impl.ProcessAdminServicesClientImpl@36bc55de, interface org.kie.server.client.DocumentServicesClient=org.kie.server.client.impl.DocumentServicesClientImpl@564fabc8, interface org.kie.server.client.admin.UserTaskAdminServicesClient=org.kie.server.client.admin.impl.UserTaskAdminServicesClientImpl@16d04d3d, interface org.kie.server.client.QueryServicesClient=org.kie.server.client.impl.QueryServicesClientImpl@49ec71f8, interface 
org.kie.server.client.ProcessServicesClient=org.kie.server.client.impl.ProcessServicesClientImpl@1d2adfbe, interface org.kie.server.client.UserTaskServicesClient=org.kie.server.client.impl.UserTaskServicesClientImpl@36902638} 10:23:35.672 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Building services client for server capability CaseMgmt 10:23:35.672 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Builder 'org.kie.server.client.helper.CaseServicesClientBuilder@223d2c72' for capability 'CaseMgmt' 10:23:35.676 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Capability implemented by {interface org.kie.server.client.admin.CaseAdminServicesClient=org.kie.server.client.admin.impl.CaseAdminServicesClientImpl@2b662a77, interface org.kie.server.client.CaseServicesClient=org.kie.server.client.impl.CaseServicesClientImpl@7f0eb4b4} 10:23:35.676 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Building services client for server capability BPM-UI 10:23:35.676 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Builder 'org.kie.server.client.helper.JBPMUIServicesClientBuilder@5c33f1a9' for capability 'BPM-UI' 10:23:35.677 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Capability implemented by {interface org.kie.server.client.UIServicesClient=org.kie.server.client.impl.UIServicesClientImpl@223191a6} 10:23:35.678 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Building services client for server capability BRP 10:23:35.678 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Builder 'org.kie.server.client.helper.OptaplannerServicesClientBuilder@49139829' for capability 'BRP' 10:23:35.679 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Capability implemented by {interface org.kie.server.client.SolverServicesClient=org.kie.server.client.impl.SolverServicesClientImpl@77fbd92c} 10:23:35.679 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Building services client for server capability DMN 10:23:35.679 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Builder 'org.kie.server.client.helper.DMNServicesClientBuilder@67c27493' for capability 'DMN' 10:23:35.680 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Capability implemented by {interface org.kie.server.client.DMNServicesClient=org.kie.server.client.impl.DMNServicesClientImpl@35e2d654} 10:23:35.680 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - Building services client for server capability Swagger 10:23:35.680 [main] DEBUG o.k.s.c.impl.KieServicesClientImpl - No builder found for 'Swagger' capability 10:23:35.681 [main] DEBUG o.k.s.client.balancer.LoadBalancer - Load balancer RoundRobinBalancerStrategy{availableEndpoints=[http://localhost:8080/kie-server/services/rest/server]} selected url 'http://localhost:8080/kie-server/services/rest/server' 10:23:35.701 [main] DEBUG o.k.s.c.i.AbstractKieServicesClientImpl - About to send PUT request to 'http://localhost:8080/kie-server/services/rest/server/containers/employee-rostering3' with payload '{ \"container-id\" : null, \"release-id\" : { \"group-id\" : \"employeerostering\", \"artifact-id\" : \"employeerostering\", \"version\" : \"1.0.0-SNAPSHOT\" }, \"resolved-release-id\" : null, \"status\" : null, \"scanner\" : null, \"config-items\" : [ ], \"messages\" : [ ], \"container-alias\" : null }' 10:23:38.071 [main] DEBUG o.k.s.c.i.AbstractKieServicesClientImpl - About to deserialize content: '{ \"type\" : \"SUCCESS\", \"msg\" : \"Container employee-rostering3 successfully deployed with module employeerostering:employeerostering:1.0.0-SNAPSHOT.\", \"result\" : { \"kie-container\" : { 
\"container-id\" : \"employee-rostering3\", \"release-id\" : { \"group-id\" : \"employeerostering\", \"artifact-id\" : \"employeerostering\", \"version\" : \"1.0.0-SNAPSHOT\" }, \"resolved-release-id\" : { \"group-id\" : \"employeerostering\", \"artifact-id\" : \"employeerostering\", \"version\" : \"1.0.0-SNAPSHOT\" }, \"status\" : \"STARTED\", \"scanner\" : { \"status\" : \"DISPOSED\", \"poll-interval\" : null }, \"config-items\" : [ ], \"messages\" : [ { \"severity\" : \"INFO\", \"timestamp\" : { \"java.util.Date\" : 1540909418069 }, \"content\" : [ \"Container employee-rostering3 successfully created with module employeerostering:employeerostering:1.0.0-SNAPSHOT.\" ] } ], \"container-alias\" : null } } }' into type: 'class org.kie.server.api.model.ServiceResponse'",
"RuleServicesClient rulesClient = kieServicesClient.getServicesClient(RuleServicesClient.class);",
"public void listCapabilities() { KieServerInfo serverInfo = kieServicesClient.getServerInfo().getResult(); System.out.print(\"Server capabilities:\"); for (String capability : serverInfo.getCapabilities()) { System.out.print(\" \" + capability); } System.out.println(); }",
"public void listContainers() { KieContainerResourceList containersList = kieServicesClient.listContainers().getResult(); List<KieContainerResource> kieContainers = containersList.getContainers(); System.out.println(\"Available containers: \"); for (KieContainerResource container : kieContainers) { System.out.println(\"\\t\" + container.getContainerId() + \" (\" + container.getReleaseId() + \")\"); } }",
"public void listContainersWithFilter() { // Filter containers by releaseId \"org.example:container:1.0.0.Final\" and status FAILED KieContainerResourceFilter filter = new KieContainerResourceFilter.Builder() .releaseId(\"org.example\", \"container\", \"1.0.0.Final\") .status(KieContainerStatus.FAILED) .build(); // Using previously created KieServicesClient KieContainerResourceList containersList = kieServicesClient.listContainers(filter).getResult(); List<KieContainerResource> kieContainers = containersList.getContainers(); System.out.println(\"Available containers: \"); for (KieContainerResource container : kieContainers) { System.out.println(\"\\t\" + container.getContainerId() + \" (\" + container.getReleaseId() + \")\"); } }",
"public void disposeAndCreateContainer() { System.out.println(\"== Disposing and creating containers ==\"); // Retrieve list of KIE containers List<KieContainerResource> kieContainers = kieServicesClient.listContainers().getResult().getContainers(); if (kieContainers.size() == 0) { System.out.println(\"No containers available...\"); return; } // Dispose KIE container KieContainerResource container = kieContainers.get(0); String containerId = container.getContainerId(); ServiceResponse<Void> responseDispose = kieServicesClient.disposeContainer(containerId); if (responseDispose.getType() == ResponseType.FAILURE) { System.out.println(\"Error disposing \" + containerId + \". Message: \"); System.out.println(responseDispose.getMsg()); return; } System.out.println(\"Success Disposing container \" + containerId); System.out.println(\"Trying to recreate the container...\"); // Re-create KIE container ServiceResponse<KieContainerResource> createResponse = kieServicesClient.createContainer(containerId, container); if(createResponse.getType() == ResponseType.FAILURE) { System.out.println(\"Error creating \" + containerId + \". Message: \"); System.out.println(responseDispose.getMsg()); return; } System.out.println(\"Container recreated with success!\"); }",
"import org.kie.api.command.Command; import org.kie.api.command.KieCommands; import org.kie.server.api.model.ServiceResponse; import org.kie.server.client.RuleServicesClient; import org.kie.server.client.KieServicesClient; import org.kie.api.KieServices; import java.util.Arrays; public void executeCommands() { String containerId = \"hello\"; System.out.println(\"== Sending commands to the server ==\"); RuleServicesClient rulesClient = kieServicesClient.getServicesClient(RuleServicesClient.class); KieCommands commandsFactory = KieServices.Factory.get().getCommands(); Command<?> insert = commandsFactory.newInsert(\"Some String OBJ\"); Command<?> fireAllRules = commandsFactory.newFireAllRules(); Command<?> batchCommand = commandsFactory.newBatchExecution(Arrays.asList(insert, fireAllRules)); ServiceResponse<String> executeResponse = rulesClient.executeCommands(containerId, batchCommand); if(executeResponse.getType() == ResponseType.SUCCESS) { System.out.println(\"Commands executed with success! Response: \"); System.out.println(executeResponse.getResult()); } else { System.out.println(\"Error executing rules. Message: \"); System.out.println(executeResponse.getMsg()); } }"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/kie-server-java-api-con_kie-apis |
2.2. Starting the Virtual Machine | 2.2. Starting the Virtual Machine 2.2.1. Starting a Virtual Machine Procedure Click Compute Virtual Machines and select a virtual machine with a status of Down. Click Run. The Status of the virtual machine changes to Up, and the operating system installation begins. Open a console to the virtual machine if one does not open automatically. Note A virtual machine will not start on a host with an overloaded CPU. By default, a host's CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies. See Scheduling Policies in the Administration Guide for more information. Troubleshooting Scenario - the virtual machine fails to boot with the following error message: Boot failed: not a bootable disk - No Bootable device Possible solutions to this problem: Make sure that the hard disk is selected in the boot sequence, and that the disk the virtual machine boots from is set as Bootable. Create a Cloned Virtual Machine Based on a Template. Create a new virtual machine with a local boot disk managed by RHV that contains the OS and application binaries. Install the OS by booting from the Network (PXE) boot option. Scenario - the virtual machine on IBM POWER9 fails to boot with the following error message: qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off Default risk level protections can prevent VMs from starting on IBM POWER9. To resolve this issue: Create or edit the /var/lib/obmc/cfam_overrides file on the BMC. Set the firmware risk level to 0: Reboot the host system for the changes to take effect. Note Overriding the risk level can cause unexpected behavior when running virtual machines. 2.2.2. Opening a console to a virtual machine Use Remote Viewer to connect to a virtual machine. Note To allow other users to connect to the VM, make sure you shut down and restart the virtual machine when you are finished using the console. Alternatively, the administrator can Disable strict user checking to eliminate the need for a reboot between users. See Virtual Machine Console Settings Explained for more information. Procedure Install Remote Viewer if it is not already installed. See Installing Console Components. Click Compute Virtual Machines and select a virtual machine. Click Console. By default, the browser prompts you to download a file named console.vv. When you click to open the file, a console window opens for the virtual machine. You can configure your browser to automatically open these files, such that clicking Console simply opens the console. Note console.vv expires after 120 seconds. If more than 120 seconds elapse between the time the file is downloaded and the time that you open the file, click Console again. Additional resources Automatically connecting to a Virtual Machine Configuring Console Options 2.2.3. Opening a Serial Console to a Virtual Machine You can access a virtual machine's serial console from the command line instead of opening a console from the Administration Portal or the VM Portal. The serial console is emulated through VirtIO channels, using SSH and key pairs. The Manager acts as a proxy for the connection, provides information about virtual machine placement, and stores the authentication keys. You can add public keys for each user from either the Administration Portal or the VM Portal.
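Note: the following client-side convenience is not part of the documented procedure. It is a sketch that assumes the setup described later in this section (the ovirt-vmconsole proxy on TCP port 2222 and the .ssh/serialconsolekey key pair); the alias serial-proxy and the host name manager.example.com are placeholders that you would replace with your own Manager FQDN and key path. Adding an entry such as this to ~/.ssh/config on the client machine saves retyping the proxy options:
# RHV serial console proxy on the Manager (placeholder values)
Host serial-proxy
    HostName manager.example.com
    Port 2222
    User ovirt-vmconsole
    IdentityFile ~/.ssh/serialconsolekey
    RequestTTY yes
With this entry in place, ssh serial-proxy list and ssh serial-proxy connect --vm-name vm1 behave the same as the full ssh commands shown in the connection procedure below.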
You can access serial consoles for only those virtual machines for which you have appropriate permissions. Important To access the serial console of a virtual machine, the user must have UserVmManager, SuperUser, or UserInstanceManager permission on that virtual machine. These permissions must be explicitly defined for each user. It is not enough to assign these permissions to Everyone. The serial console is accessed through TCP port 2222 on the Manager. This port is opened during engine-setup on new installations. To change the port, see ovirt-vmconsole/README.md. You must configure the following firewall rules to allow a serial console: Rule "M3" for the Manager firewall Rule "H2" for the host firewall The serial console relies on the ovirt-vmconsole package and the ovirt-vmconsole-proxy package on the Manager and the ovirt-vmconsole package and the ovirt-vmconsole-host package on the hosts. These packages are installed by default on new installations. To install the packages on existing installations, reinstall the hosts. Enabling a Virtual Machine's Serial Console On the virtual machine whose serial console you are accessing, add the following lines to /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8" GRUB_TERMINAL="console serial" GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1" Note GRUB_CMDLINE_LINUX_DEFAULT applies this configuration only to the default menu entry. Use GRUB_CMDLINE_LINUX to apply the configuration to all the menu entries. If these lines already exist in /etc/default/grub, update them. Do not duplicate them. Rebuild /boot/grub2/grub.cfg: BIOS-based machines: # grub2-mkconfig -o /boot/grub2/grub.cfg UEFI-based machines: # grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg See GRUB 2 over a Serial Console in the Red Hat Enterprise Linux 7 System Administrator's Guide for details. On the client machine from which you are accessing the virtual machine serial console, generate an SSH key pair. The Manager supports standard SSH key types, for example, an RSA key: # ssh-keygen -t rsa -b 2048 -f .ssh/serialconsolekey This command generates a public key and a private key. In the Administration Portal, click Administration Account Settings or click the user icon on the header bar and click Account Settings to open the Account Settings screen. OR In the VM Portal, click the Settings icon on the header bar to open the Account Settings screen. In the User's Public Key text field (Administration Portal) or SSH Key field (VM Portal), paste the public key of the client machine that will be used to access the serial console. Click Compute Virtual Machines and select a virtual machine. Click Edit. In the Console tab of the Edit Virtual Machine window, select the Enable VirtIO serial console check box. Connecting to a Virtual Machine's Serial Console On the client machine, connect to the virtual machine's serial console: If a single virtual machine is available, this command connects the user to that virtual machine: # ssh -t -p 2222 ovirt-vmconsole@Manager_FQDN -i .ssh/serialconsolekey Red Hat Enterprise Linux Server release 6.7 (Santiago) Kernel 2.6.32-573.3.1.el6.x86_64 on an x86_64 USER login: If more than one virtual machine is available, this command lists the available virtual machines and their IDs: # ssh -t -p 2222 ovirt-vmconsole@Manager_FQDN -i .ssh/serialconsolekey list 1. vm1 [vmid1] 2. vm2 [vmid2] 3.
vm3 [vmid3] > 2 Red Hat Enterprise Linux Server release 6.7 (Santiago) Kernel 2.6.32-573.3.1.el6.x86_64 on an x86_64 USER login: Enter the number of the machine to which you want to connect, and press Enter . Alternatively, connect directly to a virtual machine using its unique identifier or its name: # ssh -t -p 2222 ovirt-vmconsole@ Manager_FQDN connect --vm-id vmid1 # ssh -t -p 2222 ovirt-vmconsole@ Manager_FQDN connect --vm-name vm1 Disconnecting from a Virtual Machine's Serial Console Press any key followed by ~ . to close a serial console session. If the serial console session is disconnected abnormally, a TCP timeout occurs. You will be unable to reconnect to the virtual machine's serial console until the timeout period expires. 2.2.4. Automatically Connecting to a Virtual Machine Once you have logged in, you can automatically connect to a single running virtual machine. This can be configured in the VM Portal. Procedure In the Virtual Machines page, click the name of the virtual machine to go to the details view. Click the pencil icon beside Console and set Connect automatically to ON . The next time you log in to the VM Portal, if you have only one running virtual machine, you will automatically connect to that machine. | [
"Boot failed: not a bootable disk - No Bootable device",
"qemu-kvm: Requested count cache flush assist capability level not supported by kvm, try appending -machine cap-ccf-assist=off",
"Control speculative execution mode 0 0x283a 0x00000000 # bits 28:31 are used for init level -- in this case 0 Kernel and User protection (safest, default) 0 0x283F 0x20000000 # Indicate override register is valid",
"GRUB_CMDLINE_LINUX_DEFAULT=\"console=tty0 console=ttyS0,115200n8\" GRUB_TERMINAL=\"console serial\" GRUB_SERIAL_COMMAND=\"serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1\"",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg",
"ssh-keygen -t rsa -b 2048 -f .ssh/serialconsolekey",
"ssh -t -p 2222 ovirt-vmconsole@ Manager_FQDN -i .ssh/serialconsolekey Red Hat Enterprise Linux Server release 6.7 (Santiago) Kernel 2.6.32-573.3.1.el6.x86_64 on an x86_64 USER login:",
"ssh -t -p 2222 ovirt-vmconsole@ Manager_FQDN -i .ssh/serialconsolekey list 1. vm1 [vmid1] 2. vm2 [vmid2] 3. vm3 [vmid3] > 2 Red Hat Enterprise Linux Server release 6.7 (Santiago) Kernel 2.6.32-573.3.1.el6.x86_64 on an x86_64 USER login:",
"ssh -t -p 2222 ovirt-vmconsole@ Manager_FQDN connect --vm-id vmid1",
"ssh -t -p 2222 ovirt-vmconsole@ Manager_FQDN connect --vm-name vm1"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/starting_the_virtual_machine_linux_vm |
19.2. Configure Passivation | 19.2. Configure Passivation In Red Hat JBoss Data Grid's Remote Client-Server mode, add the passivation parameter to the cache store element to toggle passivation for it: Example 19.1. Toggle Passivation in Remote Client-Server Mode In Library mode, add the passivation parameter to the persistence element to toggle passivation: Example 19.2. Toggle Passivation in Library Mode | [
"<local-cache name=\"customCache\"/> <!-- Additional configuration elements here --> <file-store passivation=\"true\" <!-- Additional configuration elements here --> /> <!-- Additional configuration elements here --> </local-cache>",
"<persistence passivation=\"true\"> <!-- Additional configuration elements here --> </persistence>"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/configure_passivation |
Chapter 7. Applying autoscaling to an OpenShift Container Platform cluster | Chapter 7. Applying autoscaling to an OpenShift Container Platform cluster Applying autoscaling to an OpenShift Container Platform cluster involves deploying a cluster autoscaler and then deploying machine autoscalers for each machine type in your cluster. Important You can configure the cluster autoscaler only in clusters where the Machine API Operator is operational. 7.1. About the cluster autoscaler The cluster autoscaler adjusts the size of an OpenShift Container Platform cluster to meet its current deployment needs. It uses declarative, Kubernetes-style arguments to provide infrastructure management that does not rely on objects of a specific cloud provider. The cluster autoscaler has a cluster scope, and is not associated with a particular namespace. The cluster autoscaler increases the size of the cluster when there are pods that fail to schedule on any of the current worker nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify. The cluster autoscaler computes the total memory, CPU, and GPU on all nodes in the cluster, even though it does not manage the control plane nodes. These values are not single-machine oriented. They are an aggregation of all the resources in the entire cluster. For example, if you set the maximum memory resource limit, the cluster autoscaler includes all the nodes in the cluster when calculating the current memory usage. That calculation is then used to determine if the cluster autoscaler has the capacity to add more worker resources. Important Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition that you create is large enough to account for the total possible number of machines in your cluster. This value must encompass the number of control plane machines and the possible number of compute machines that you might scale to. Automatic node removal Every 10 seconds, the cluster autoscaler checks which nodes are unnecessary in the cluster and removes them. The cluster autoscaler considers a node for removal if the following conditions apply: The node utilization is less than the node utilization level threshold for the cluster. The node utilization level is the sum of the requested resources divided by the allocated resources for the node. If you do not specify a value in the ClusterAutoscaler custom resource, the cluster autoscaler uses a default value of 0.5 , which corresponds to 50% utilization. The cluster autoscaler can move all pods running on the node to the other nodes. The Kubernetes scheduler is responsible for scheduling pods on the nodes. The node does not have the scale down disabled annotation. If the following types of pods are present on a node, the cluster autoscaler will not remove the node: Pods with restrictive pod disruption budgets (PDBs). Kube-system pods that do not run on the node by default. Kube-system pods that do not have a PDB or have a PDB that is too restrictive. Pods that are not backed by a controller object such as a deployment, replica set, or stateful set. Pods with local storage. Pods that cannot be moved elsewhere because of a lack of resources, incompatible node selectors or affinity, matching anti-affinity, and so on.
Unless they also have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "true" annotation, pods that have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation. For example, you set the maximum CPU limit to 64 cores and configure the cluster autoscaler to only create machines that have 8 cores each. If your cluster starts with 30 cores, the cluster autoscaler can add up to 4 more nodes with 32 cores, for a total of 62. Limitations If you configure the cluster autoscaler, additional usage restrictions apply: Do not modify the nodes that are in autoscaled node groups directly. All nodes within the same node group have the same capacity and labels and run the same system pods. Specify requests for your pods. If you have to prevent pods from being deleted too quickly, configure appropriate PDBs. Confirm that your cloud provider quota is large enough to support the maximum node pools that you configure. Do not run additional node group autoscalers, especially the ones offered by your cloud provider. Note The cluster autoscaler only adds nodes in autoscaled node groups if doing so would result in a schedulable pod. If the available node types cannot meet the requirements for a pod request, or if the node groups that could meet these requirements are at their maximum size, the cluster autoscaler cannot scale up. Interaction with other scheduling features The horizontal pod autoscaler (HPA) and the cluster autoscaler modify cluster resources in different ways. The HPA changes the deployment's or replica set's number of replicas based on the current CPU load. If the load increases, the HPA creates new replicas, regardless of the amount of resources available to the cluster. If there are not enough resources, the cluster autoscaler adds resources so that the HPA-created pods can run. If the load decreases, the HPA stops some replicas. If this action causes some nodes to be underutilized or completely empty, the cluster autoscaler deletes the unnecessary nodes. The cluster autoscaler takes pod priorities into account. The Pod Priority and Preemption feature enables scheduling pods based on priorities if the cluster does not have enough resources, but the cluster autoscaler ensures that the cluster has resources to run all pods. To honor the intention of both features, the cluster autoscaler includes a priority cutoff function. You can use this cutoff to schedule "best-effort" pods, which do not cause the cluster autoscaler to increase resources but instead run only when spare resources are available. Pods with priority lower than the cutoff value do not cause the cluster to scale up or prevent the cluster from scaling down. No new nodes are added to run the pods, and nodes running these pods might be deleted to free resources. 7.1.1. Configuring the cluster autoscaler First, deploy the cluster autoscaler to manage automatic resource scaling in your OpenShift Container Platform cluster. Note Because the cluster autoscaler is scoped to the entire cluster, you can make only one cluster autoscaler for the cluster. 7.1.1.1. Cluster autoscaler resource definition This ClusterAutoscaler resource definition shows the parameters and sample values for the cluster autoscaler. Note When you change the configuration of an existing cluster autoscaler, it restarts. 
apiVersion: "autoscaling.openshift.io/v1" kind: "ClusterAutoscaler" metadata: name: "default" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: <gpu_type> 7 min: 0 8 max: 16 9 logVerbosity: 4 10 scaleDown: 11 enabled: true 12 delayAfterAdd: 10m 13 delayAfterDelete: 5m 14 delayAfterFailure: 30s 15 unneededTime: 5m 16 utilizationThreshold: "0.4" 17 expanders: ["Random"] 18 1 Specify the priority that a pod must exceed to cause the cluster autoscaler to deploy additional nodes. Enter a 32-bit integer value. The podPriorityThreshold value is compared to the value of the PriorityClass that you assign to each pod. 2 Specify the maximum number of nodes to deploy. This value is the total number of machines that are deployed in your cluster, not just the ones that the autoscaler controls. Ensure that this value is large enough to account for all of your control plane and compute machines and the total number of replicas that you specify in your MachineAutoscaler resources. 3 Specify the minimum number of cores to deploy in the cluster. 4 Specify the maximum number of cores to deploy in the cluster. 5 Specify the minimum amount of memory, in GiB, in the cluster. 6 Specify the maximum amount of memory, in GiB, in the cluster. 7 Optional: To configure the cluster autoscaler to deploy GPU-enabled nodes, specify a type value. This value must match the value of the spec.template.spec.metadata.labels[cluster-api/accelerator] label in the machine set that manages the GPU-enabled nodes of that type. For example, this value might be nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. For more information, see "Labeling GPU machine sets for the cluster autoscaler". 8 Specify the minimum number of GPUs of the specified type to deploy in the cluster. 9 Specify the maximum number of GPUs of the specified type to deploy in the cluster. 10 Specify the logging verbosity level between 0 and 10 . The following log level thresholds are provided for guidance: 1 : (Default) Basic information about changes. 4 : Debug-level verbosity for troubleshooting typical issues. 9 : Extensive, protocol-level debugging information. If you do not specify a value, the default value of 1 is used. 11 In this section, you can specify the period to wait for each action by using any valid ParseDuration interval, including ns , us , ms , s , m , and h . 12 Specify whether the cluster autoscaler can remove unnecessary nodes. 13 Optional: Specify the period to wait before deleting a node after a node has recently been added . If you do not specify a value, the default value of 10m is used. 14 Optional: Specify the period to wait before deleting a node after a node has recently been deleted . If you do not specify a value, the default value of 0s is used. 15 Optional: Specify the period to wait before deleting a node after a scale down failure occurred. If you do not specify a value, the default value of 3m is used. 16 Optional: Specify a period of time before an unnecessary node is eligible for deletion. If you do not specify a value, the default value of 10m is used. 17 Optional: Specify the node utilization level . Nodes below this utilization level are eligible for deletion. The node utilization level is the sum of the requested resources divided by the allocated resources for the node, and must be a value greater than "0" but less than "1" . 
If you do not specify a value, the cluster autoscaler uses a default value of "0.5" , which corresponds to 50% utilization. You must express this value as a string. 18 Optional: Specify any expanders that you want the cluster autoscaler to use. The following values are valid: LeastWaste : Selects the machine set that minimizes the idle CPU after scaling. If multiple machine sets would yield the same amount of idle CPU, the selection minimizes unused memory. Priority : Selects the machine set with the highest user-assigned priority. To use this expander, you must create a config map that defines the priority of your machine sets. For more information, see "Configuring a priority expander for the cluster autoscaler." Random : (Default) Selects the machine set randomly. If you do not specify a value, the default value of Random is used. You can specify multiple expanders by using the [LeastWaste, Priority] format. The cluster autoscaler applies each expander according to the specified order. In the [LeastWaste, Priority] example, the cluster autoscaler first evaluates according to the LeastWaste criteria. If more than one machine set satisfies the LeastWaste criteria equally well, the cluster autoscaler then evaluates according to the Priority criteria. If more than one machine set satisfies all of the specified expanders equally well, the cluster autoscaler selects one to use at random. Note When performing a scaling operation, the cluster autoscaler remains within the ranges set in the ClusterAutoscaler resource definition, such as the minimum and maximum number of cores to deploy or the amount of memory in the cluster. However, the cluster autoscaler does not correct the current values in your cluster to be within those ranges. The minimum and maximum CPUs, memory, and GPU values are determined by calculating those resources on all nodes in the cluster, even if the cluster autoscaler does not manage the nodes. For example, the control plane nodes are considered in the total memory in the cluster, even though the cluster autoscaler does not manage the control plane nodes. 7.1.1.2. Configuring a priority expander for the cluster autoscaler When the cluster autoscaler uses the priority expander, it scales up by using the machine set with the highest user-assigned priority. To use this expander, you must create a config map that defines the priority of your machine sets. For each specified priority level, you must create regular expressions to identify machine sets that you want to use when prioritizing a machine set for selection. The regular expressions must match the name of any compute machine set that you want the cluster autoscaler to consider for selection. Prerequisites You have deployed an OpenShift Container Platform cluster that uses the Machine API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). 
Procedure List the compute machine sets on your cluster by running the following command: USD oc get machinesets.machine.openshift.io Example output NAME DESIRED CURRENT READY AVAILABLE AGE archive-agl030519-vplxk-worker-us-east-1c 1 1 1 1 25m fast-01-agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m fast-02-agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m fast-03-agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m fast-04-agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m prod-01-agl030519-vplxk-worker-us-east-1a 1 1 1 1 33m prod-02-agl030519-vplxk-worker-us-east-1c 1 1 1 1 33m Using regular expressions, construct one or more patterns that match the name of any compute machine set that you want to set a priority level for. For example, use the regular expression pattern *fast* to match any compute machine set that includes the string fast in its name. Create a cluster-autoscaler-priority-expander.yml YAML file that defines a config map similar to the following: Example priority expander config map apiVersion: v1 kind: ConfigMap metadata: name: cluster-autoscaler-priority-expander 1 namespace: openshift-machine-api 2 data: priorities: |- 3 10: - .*fast.* - .*archive.* 40: - .*prod.* 1 You must name config map cluster-autoscaler-priority-expander . 2 You must create the config map in the same namespace as cluster autoscaler pod, which is the openshift-machine-api namespace. 3 Define the priority of your machine sets. The priorities values must be positive integers. The cluster autoscaler uses higher-value priorities before lower-value priorities. For each priority level, specify the regular expressions that correspond to the machine sets you want to use. Create the config map by running the following command: USD oc create configmap cluster-autoscaler-priority-expander \ --from-file=<location_of_config_map_file>/cluster-autoscaler-priority-expander.yml Verification Review the config map by running the following command: USD oc get configmaps cluster-autoscaler-priority-expander -o yaml steps To use the priority expander, ensure that the ClusterAutoscaler resource definition is configured to use the expanders: ["Priority"] parameter. 7.1.1.3. Labeling GPU machine sets for the cluster autoscaler You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes. Prerequisites Your cluster uses a cluster autoscaler. Procedure On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1 1 Specify a label of your choice that consists of alphanumeric characters, - , _ , or . and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs. Note You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition". 7.1.2. Deploying a cluster autoscaler To deploy a cluster autoscaler, you create an instance of the ClusterAutoscaler resource. Procedure Create a YAML file for a ClusterAutoscaler resource that contains the custom resource definition. Create the custom resource in the cluster by running the following command: USD oc create -f <filename>.yaml 1 1 <filename> is the name of the custom resource file. 
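For illustration only, a minimal ClusterAutoscaler file might contain just a subset of the parameters described in "Cluster autoscaler resource definition"; the limits below are placeholder values taken from that sample, not recommendations for your cluster.
apiVersion: "autoscaling.openshift.io/v1"
kind: "ClusterAutoscaler"
metadata:
  name: "default"
spec:
  resourceLimits:
    maxNodesTotal: 24   # must be large enough for control plane and compute machines
    cores:
      min: 8
      max: 128
    memory:
      min: 4
      max: 256
  scaleDown:
    enabled: true
Optional parameters that you omit fall back to the defaults noted in the callouts of the resource definition.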
steps After you configure the cluster autoscaler, you must configure at least one machine autoscaler . 7.2. About the machine autoscaler The machine autoscaler adjusts the number of Machines in the compute machine sets that you deploy in an OpenShift Container Platform cluster. You can scale both the default worker compute machine set and any other compute machine sets that you create. The machine autoscaler makes more Machines when the cluster runs out of resources to support more deployments. Any changes to the values in MachineAutoscaler resources, such as the minimum or maximum number of instances, are immediately applied to the compute machine set they target. Important You must deploy a machine autoscaler for the cluster autoscaler to scale your machines. The cluster autoscaler uses the annotations on compute machine sets that the machine autoscaler sets to determine the resources that it can scale. If you define a cluster autoscaler without also defining machine autoscalers, the cluster autoscaler will never scale your cluster. 7.2.1. Configuring machine autoscalers After you deploy the cluster autoscaler, deploy MachineAutoscaler resources that reference the compute machine sets that are used to scale the cluster. Important You must deploy at least one MachineAutoscaler resource after you deploy the ClusterAutoscaler resource. Note You must configure separate resources for each compute machine set. Remember that compute machine sets are different in each region, so consider whether you want to enable machine scaling in multiple regions. The compute machine set that you scale must have at least one machine in it. 7.2.1.1. Machine autoscaler resource definition This MachineAutoscaler resource definition shows the parameters and sample values for the machine autoscaler. apiVersion: "autoscaling.openshift.io/v1beta1" kind: "MachineAutoscaler" metadata: name: "worker-us-east-1a" 1 namespace: "openshift-machine-api" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6 1 Specify the machine autoscaler name. To make it easier to identify which compute machine set this machine autoscaler scales, specify or include the name of the compute machine set to scale. The compute machine set name takes the following form: <clusterid>-<machineset>-<region> . 2 Specify the minimum number machines of the specified type that must remain in the specified zone after the cluster autoscaler initiates cluster scaling. If running in AWS, GCP, Azure, RHOSP, or vSphere, this value can be set to 0 . For other providers, do not set this value to 0 . You can save on costs by setting this value to 0 for use cases such as running expensive or limited-usage hardware that is used for specialized workloads, or by scaling a compute machine set with extra large machines. The cluster autoscaler scales the compute machine set down to zero if the machines are not in use. Important Do not set the spec.minReplicas value to 0 for the three compute machine sets that are created during the OpenShift Container Platform installation process for an installer provisioned infrastructure. 3 Specify the maximum number machines of the specified type that the cluster autoscaler can deploy in the specified zone after it initiates cluster scaling. Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition is large enough to allow the machine autoscaler to deploy this number of machines. 
4 In this section, provide values that describe the existing compute machine set to scale. 5 The kind parameter value is always MachineSet . 6 The name value must match the name of an existing compute machine set, as shown in the metadata.name parameter value. 7.2.2. Deploying a machine autoscaler To deploy a machine autoscaler, you create an instance of the MachineAutoscaler resource. Procedure Create a YAML file for a MachineAutoscaler resource that contains the custom resource definition. Create the custom resource in the cluster by running the following command: USD oc create -f <filename>.yaml 1 1 <filename> is the name of the custom resource file. 7.3. Disabling autoscaling You can disable an individual machine autoscaler in your cluster or disable autoscaling on the cluster entirely. 7.3.1. Disabling a machine autoscaler To disable a machine autoscaler, you delete the corresponding MachineAutoscaler custom resource (CR). Note Disabling a machine autoscaler does not disable the cluster autoscaler. To disable the cluster autoscaler, follow the instructions in "Disabling the cluster autoscaler". Procedure List the MachineAutoscaler CRs for the cluster by running the following command: USD oc get MachineAutoscaler -n openshift-machine-api Example output NAME REF KIND REF NAME MIN MAX AGE compute-us-east-1a MachineSet compute-us-east-1a 1 12 39m compute-us-west-1a MachineSet compute-us-west-1a 2 4 37m Optional: Create a YAML file backup of the MachineAutoscaler CR by running the following command: USD oc get MachineAutoscaler/<machine_autoscaler_name> \ 1 -n openshift-machine-api \ -o yaml> <machine_autoscaler_name_backup>.yaml 2 1 <machine_autoscaler_name> is the name of the CR that you want to delete. 2 <machine_autoscaler_name_backup> is the name for the backup of the CR. Delete the MachineAutoscaler CR by running the following command: USD oc delete MachineAutoscaler/<machine_autoscaler_name> -n openshift-machine-api Example output machineautoscaler.autoscaling.openshift.io "compute-us-east-1a" deleted Verification To verify that the machine autoscaler is disabled, run the following command: USD oc get MachineAutoscaler -n openshift-machine-api The disabled machine autoscaler does not appear in the list of machine autoscalers. steps If you need to re-enable the machine autoscaler, use the <machine_autoscaler_name_backup>.yaml backup file and follow the instructions in "Deploying a machine autoscaler". Additional resources Disabling the cluster autoscaler Deploying a machine autoscaler 7.3.2. Disabling the cluster autoscaler To disable the cluster autoscaler, you delete the corresponding ClusterAutoscaler resource. Note Disabling the cluster autoscaler disables autoscaling on the cluster, even if the cluster has existing machine autoscalers. Procedure List the ClusterAutoscaler resource for the cluster by running the following command: USD oc get ClusterAutoscaler Example output NAME AGE default 42m Optional: Create a YAML file backup of the ClusterAutoscaler CR by running the following command: USD oc get ClusterAutoscaler/default \ 1 -o yaml> <cluster_autoscaler_backup_name>.yaml 2 1 default is the name of the ClusterAutoscaler CR. 2 <cluster_autoscaler_backup_name> is the name for the backup of the CR. 
Delete the ClusterAutoscaler CR by running the following command: USD oc delete ClusterAutoscaler/default Example output clusterautoscaler.autoscaling.openshift.io "default" deleted Verification To verify that the cluster autoscaler is disabled, run the following command: USD oc get ClusterAutoscaler Expected output No resources found steps Disabling the cluster autoscaler by deleting the ClusterAutoscaler CR prevents the cluster from autoscaling but does not delete any existing machine autoscalers on the cluster. To clean up unneeded machine autoscalers, see "Disabling a machine autoscaler". If you need to re-enable the cluster autoscaler, use the <cluster_autoscaler_name_backup>.yaml backup file and follow the instructions in "Deploying a cluster autoscaler". Additional resources Disabling the machine autoscaler Deploying a cluster autoscaler 7.4. Additional resources Including pod priority in pod scheduling decisions in OpenShift Container Platform | [
"apiVersion: \"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: <gpu_type> 7 min: 0 8 max: 16 9 logVerbosity: 4 10 scaleDown: 11 enabled: true 12 delayAfterAdd: 10m 13 delayAfterDelete: 5m 14 delayAfterFailure: 30s 15 unneededTime: 5m 16 utilizationThreshold: \"0.4\" 17 expanders: [\"Random\"] 18",
"oc get machinesets.machine.openshift.io",
"NAME DESIRED CURRENT READY AVAILABLE AGE archive-agl030519-vplxk-worker-us-east-1c 1 1 1 1 25m fast-01-agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m fast-02-agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m fast-03-agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m fast-04-agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m prod-01-agl030519-vplxk-worker-us-east-1a 1 1 1 1 33m prod-02-agl030519-vplxk-worker-us-east-1c 1 1 1 1 33m",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-autoscaler-priority-expander 1 namespace: openshift-machine-api 2 data: priorities: |- 3 10: - .*fast.* - .*archive.* 40: - .*prod.*",
"oc create configmap cluster-autoscaler-priority-expander --from-file=<location_of_config_map_file>/cluster-autoscaler-priority-expander.yml",
"oc get configmaps cluster-autoscaler-priority-expander -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1",
"oc create -f <filename>.yaml 1",
"apiVersion: \"autoscaling.openshift.io/v1beta1\" kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6",
"oc create -f <filename>.yaml 1",
"oc get MachineAutoscaler -n openshift-machine-api",
"NAME REF KIND REF NAME MIN MAX AGE compute-us-east-1a MachineSet compute-us-east-1a 1 12 39m compute-us-west-1a MachineSet compute-us-west-1a 2 4 37m",
"oc get MachineAutoscaler/<machine_autoscaler_name> \\ 1 -n openshift-machine-api -o yaml> <machine_autoscaler_name_backup>.yaml 2",
"oc delete MachineAutoscaler/<machine_autoscaler_name> -n openshift-machine-api",
"machineautoscaler.autoscaling.openshift.io \"compute-us-east-1a\" deleted",
"oc get MachineAutoscaler -n openshift-machine-api",
"oc get ClusterAutoscaler",
"NAME AGE default 42m",
"oc get ClusterAutoscaler/default \\ 1 -o yaml> <cluster_autoscaler_backup_name>.yaml 2",
"oc delete ClusterAutoscaler/default",
"clusterautoscaler.autoscaling.openshift.io \"default\" deleted",
"oc get ClusterAutoscaler",
"No resources found"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_management/applying-autoscaling |
Chapter 4. Using the Repository custom resource | Chapter 4. Using the Repository custom resource The Repository custom resource (CR) has the following primary functions: Inform Pipelines as Code about processing an event from a URL. Inform Pipelines as Code about the namespace for the pipeline runs. Reference an API secret, username, or an API URL necessary for Git provider platforms when using webhook methods. Provide the last pipeline run status for a repository. 4.1. Creating the Repository custom resource You can use the tkn pac CLI or other alternative methods to create a Repository custom resource (CR) inside the target namespace. For example: cat <<EOF|kubectl create -n my-pipeline-ci -f- 1 apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: project-repository spec: url: "https://github.com/<repository>/<project>" EOF 1 my-pipeline-ci is the target namespace. Whenever there is an event coming from the URL such as https://github.com/<repository>/<project> , Pipelines as Code matches it and then starts checking out the content of the <repository>/<project> repository for the pipeline run to match the content in the .tekton/ directory. Note You must create the Repository CR in the same namespace where pipelines associated with the source code repository will be executed; it cannot target a different namespace. If multiple Repository CRs match the same event, Pipelines as Code processes only the oldest one. If you need to match a specific namespace, add the pipelinesascode.tekton.dev/target-namespace: "<mynamespace>" annotation. Such explicit targeting prevents a malicious actor from executing a pipeline run in a namespace to which they do not have access. 4.2. Creating the global Repository custom resource Optionally, you can create a global Repository custom resource (CR) in the namespace where OpenShift Pipelines is installed, normally openshift-pipelines . If you create this CR, the settings that you specify in it apply by default to all Repository CRs that you create. Important The global Repository CR is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have administrator access to the openshift-pipelines namespace. You logged on to the OpenShift cluster using the oc command line utility. Procedure Create a Repository CR named pipeline-as-code in the openshift-pipelines namespace. Specify all the required default settings in this CR. Example command to create the CR USD cat <<EOF|oc create -n openshift-pipelines -f - apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: pipelines-as-code spec: git_provider: secret: name: "gitlab-webhook-config" key: "provider.token" webhook_secret: name: "gitlab-webhook-config" key: "webhook.secret" EOF In this example, all Repository CRs that you create include the common secrets for accessing your GitLab repositories. You can set different repository URLs and other settings in the CRs. 4.3. 
Setting concurrency limits You can use the concurrency_limit spec in the Repository custom resource definition (CRD) to define the maximum number of pipeline runs running simultaneously for a repository. apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: my-repo namespace: target-namespace spec: # ... concurrency_limit: <number> # ... If there are multiple pipeline runs matching an event, the pipeline runs that match the event start in an alphabetical order. For example, if you have three pipeline runs in the .tekton directory and you create a pull request with a concurrency_limit of 1 in the repository configuration, then all the pipeline runs are executed in an alphabetical order. At any given time, only one pipeline run is in the running state while the rest are queued. 4.4. Changing the source branch for the pipeline definition By default, when processing a push event or a pull request event, Pipelines as Code fetches the pipeline definition from the branch that triggered the event. You can use the pipelinerun_provenance setting in the Repository custom resource definition (CRD) to fetch the definition from the default branch configured on the Git repository provider, such as main , master , or trunk . apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: my-repo namespace: target-namespace spec: # ... settings: pipelinerun_provenance: "default_branch" # ... Note You can use this setting as a security precaution. With the default behaviour, Pipelines as Code uses the pipeline definition in the submitted pull request. With the default-branch setting, the pipeline definition must be merged into the default branch before it is run. This requirement ensures maximum possible verification of any changes during merge review. 4.5. Custom parameter expansion You can use Pipelines as Code to expand a custom parameter within your PipelineRun resource by using the params field. You can specify a value for the custom parameter inside the template of the Repository custom resource (CR). The specified value replaces the custom parameter in your pipeline run. You can use custom parameters in the following scenarios: To define a URL parameter, such as a registry URL that varies based on a push or a pull request. To define a parameter, such as an account UUID that an administrator can manage without necessitating changes to the PipelineRun execution in the Git repository. Note Use the custom parameter expansion feature only when you cannot use the Tekton PipelineRun parameters because Tekton parameters are defined in a Pipeline resource and customized alongside it inside a Git repository. However, custom parameters are defined and customized where the Repository CR is located. So, you cannot manage your CI/CD pipeline from a single point. The following example shows a custom parameter named company in the Repository CR: ... spec: params: - name: company value: "ABC Company" ... The value ABC Company replaces the parameter name company in your pipeline run and in the remotely fetched tasks. You can also retrieve the value for a custom parameter from a Kubernetes secret, as shown in the following example: ... spec: params: - name: company secretRef: name: my-secret key: companyname ... Pipelines as Code parses and uses custom parameters in the following manner: If you have a value and a secretRef defined, Pipelines as Code uses the value . If you do not have a name in the params section, Pipelines as Code does not parse the parameter. 
If you have multiple params with the same name , Pipelines as Code uses the last parameter. You can also define a custom parameter and use its expansion only when specified conditions were matched for a CEL filter. The following example shows a CEL filter applicable on a custom parameter named company when a pull request event is triggered: ... spec: params: - name: company value: "ABC Company" filter: - name: event value: | pac.event_type == "pull_request" ... Note When you have multiple parameters with the same name and different filters, Pipelines as Code uses the first parameter that matches the filter. So, Pipelines as Code allows you to expand parameters according to different event types. For example, you can combine a push and a pull request event. | [
"cat <<EOF|kubectl create -n my-pipeline-ci -f- 1 apiVersion: \"pipelinesascode.tekton.dev/v1alpha1\" kind: Repository metadata: name: project-repository spec: url: \"https://github.com/<repository>/<project>\" EOF",
"cat <<EOF|oc create -n openshift-pipelines -f - apiVersion: \"pipelinesascode.tekton.dev/v1alpha1\" kind: Repository metadata: name: pipelines-as-code spec: git_provider: secret: name: \"gitlab-webhook-config\" key: \"provider.token\" webhook_secret: name: \"gitlab-webhook-config\" key: \"webhook.secret\" EOF",
"apiVersion: \"pipelinesascode.tekton.dev/v1alpha1\" kind: Repository metadata: name: my-repo namespace: target-namespace spec: concurrency_limit: <number>",
"apiVersion: \"pipelinesascode.tekton.dev/v1alpha1\" kind: Repository metadata: name: my-repo namespace: target-namespace spec: settings: pipelinerun_provenance: \"default_branch\"",
"spec: params: - name: company value: \"ABC Company\"",
"spec: params: - name: company secretRef: name: my-secret key: companyname",
"spec: params: - name: company value: \"ABC Company\" filter: - name: event value: | pac.event_type == \"pull_request\""
] | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/pipelines_as_code/using-repository-crd |
Backing up and restoring the undercloud and control plane nodes | Backing up and restoring the undercloud and control plane nodes Red Hat OpenStack Platform 17.0 Creating and restoring backups of the undercloud and the overcloud control plane nodes | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/backing_up_and_restoring_the_undercloud_and_control_plane_nodes/index |
1.4. Security Threats | 1.4. Security Threats 1.4.1. Threats to Network Security Bad practices when configuring the following aspects of a network can increase the risk of an attack. Insecure Architectures A misconfigured network is a primary entry point for unauthorized users. Leaving a trust-based, open local network vulnerable to the highly-insecure Internet is much like leaving a door ajar in a crime-ridden neighborhood - nothing may happen for an arbitrary amount of time, but someone exploits the opportunity eventually . Broadcast Networks System administrators often fail to realize the importance of networking hardware in their security schemes. Simple hardware, such as hubs and routers, relies on the broadcast or non-switched principle; that is, whenever a node transmits data across the network to a recipient node, the hub or router sends a broadcast of the data packets until the recipient node receives and processes the data. This method is the most vulnerable to address resolution protocol ( ARP ) or media access control ( MAC ) address spoofing by both outside intruders and unauthorized users on local hosts. Centralized Servers Another potential networking pitfall is the use of centralized computing. A common cost-cutting measure for many businesses is to consolidate all services to a single powerful machine. This can be convenient as it is easier to manage and costs considerably less than multiple-server configurations. However, a centralized server introduces a single point of failure on the network. If the central server is compromised, it may render the network completely useless or worse, prone to data manipulation or theft. In these situations, a central server becomes an open door that allows access to the entire network. 1.4.2. Threats to Server Security Server security is as important as network security because servers often hold a great deal of an organization's vital information. If a server is compromised, all of its contents may become available for the cracker to steal or manipulate at will. The following sections detail some of the main issues. Unused Services and Open Ports A full installation of Red Hat Enterprise Linux 7 contains more than 1000 application and library packages. However, most server administrators do not opt to install every single package in the distribution, preferring instead to install a base installation of packages, including several server applications. See Section 2.3, "Installing the Minimum Amount of Packages Required" for an explanation of the reasons to limit the number of installed packages and for additional resources. A common occurrence among system administrators is to install the operating system without paying attention to what programs are actually being installed. This can be problematic because unneeded services may be installed, configured with the default settings, and possibly turned on. This can cause unwanted services, such as Telnet, DHCP, or DNS, to run on a server or workstation without the administrator realizing it, which in turn can cause unwanted traffic to the server or even a potential pathway into the system for crackers. See Section 4.3, "Securing Services" for information on closing ports and disabling unused services. Unpatched Services Most server applications that are included in a default installation are solid, thoroughly tested pieces of software. Having been in use in production environments for many years, their code has been thoroughly refined and many of the bugs have been found and fixed. 
However, there is no such thing as perfect software and there is always room for further refinement. Moreover, newer software is often not as rigorously tested as one might expect, because of its recent arrival to production environments or because it may not be as popular as other server software. Developers and system administrators often find exploitable bugs in server applications and publish the information on bug tracking and security-related websites such as the Bugtraq mailing list or the Computer Emergency Response Team (CERT) website ( http://www.cert.org ). Although these mechanisms are an effective way of alerting the community to security vulnerabilities, it is up to system administrators to patch their systems promptly. This is particularly true because crackers have access to these same vulnerability tracking services and will use the information to crack unpatched systems whenever they can. Good system administration requires vigilance, constant bug tracking, and proper system maintenance to ensure a more secure computing environment. See Chapter 3, Keeping Your System Up-to-Date for more information about keeping a system up-to-date. Inattentive Administration Administrators who fail to patch their systems are one of the greatest threats to server security. According to the SysAdmin, Audit, Network, Security Institute ( SANS ), the primary cause of computer security vulnerability is "assigning untrained people to maintain security and providing neither the training nor the time to make it possible to learn and do the job." This applies as much to inexperienced administrators as it does to overconfident or amotivated administrators. Some administrators fail to patch their servers and workstations, while others fail to watch log messages from the system kernel or network traffic. Another common error is when default passwords or keys to services are left unchanged. For example, some databases have default administration passwords because the database developers assume that the system administrator changes these passwords immediately after installation. If a database administrator fails to change this password, even an inexperienced cracker can use a widely-known default password to gain administrative privileges to the database. These are only a few examples of how inattentive administration can lead to compromised servers. Inherently Insecure Services Even the most vigilant organization can fall victim to vulnerabilities if the network services they choose are inherently insecure. For instance, there are many services developed under the assumption that they are used over trusted networks; however, this assumption fails as soon as the service becomes available over the Internet - which is itself inherently untrusted. One category of insecure network services consists of those that require unencrypted usernames and passwords for authentication. Telnet and FTP are two such services. If packet sniffing software is monitoring traffic between the remote user and such a service, usernames and passwords can be easily intercepted. Inherently, such services can also more easily fall prey to what the security industry terms the man-in-the-middle attack. In this type of attack, a cracker redirects network traffic by tricking a cracked name server on the network to point to his machine instead of the intended server.
Once someone opens a remote session to the server, the attacker's machine acts as an invisible conduit, sitting quietly between the remote service and the unsuspecting user capturing information. In this way a cracker can gather administrative passwords and raw data without the server or the user realizing it. Another category of insecure services include network file systems and information services such as NFS or NIS, which are developed explicitly for LAN usage but are, unfortunately, extended to include WANs (for remote users). NFS does not, by default, have any authentication or security mechanisms configured to prevent a cracker from mounting the NFS share and accessing anything contained therein. NIS, as well, has vital information that must be known by every computer on a network, including passwords and file permissions, within a plain text ASCII or DBM (ASCII-derived) database. A cracker who gains access to this database can then access every user account on a network, including the administrator's account. By default, Red Hat Enterprise Linux 7 is released with all such services turned off. However, since administrators often find themselves forced to use these services, careful configuration is critical. See Section 4.3, "Securing Services" for more information about setting up services in a safe manner. 1.4.3. Threats to Workstation and Home PC Security Workstations and home PCs may not be as prone to attack as networks or servers, but since they often contain sensitive data, such as credit card information, they are targeted by system crackers. Workstations can also be co-opted without the user's knowledge and used by attackers as "slave" machines in coordinated attacks. For these reasons, knowing the vulnerabilities of a workstation can save users the headache of reinstalling the operating system, or worse, recovering from data theft. Bad Passwords Bad passwords are one of the easiest ways for an attacker to gain access to a system. For more on how to avoid common pitfalls when creating a password, see Section 4.1.1, "Password Security" . Vulnerable Client Applications Although an administrator may have a fully secure and patched server, that does not mean remote users are secure when accessing it. For instance, if the server offers Telnet or FTP services over a public network, an attacker can capture the plain text usernames and passwords as they pass over the network, and then use the account information to access the remote user's workstation. Even when using secure protocols, such as SSH, a remote user may be vulnerable to certain attacks if they do not keep their client applications updated. For instance, v.1 SSH clients are vulnerable to an X-forwarding attack from malicious SSH servers. Once connected to the server, the attacker can quietly capture any keystrokes and mouse clicks made by the client over the network. This problem was fixed in the v.2 SSH protocol, but it is up to the user to keep track of what applications have such vulnerabilities and update them as necessary. Section 4.1, "Desktop Security" discusses in more detail what steps administrators and home users should take to limit the vulnerability of computer workstations. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-security_threats |
Chapter 24. Viewing the history log of a task | Chapter 24. Viewing the history log of a task You can view the history log of a task in Business Central from the Logs tab of task. The history log lists all the events in the "Date Time: Task event" format. Procedure In Business Central, go to Menu Track Task Inbox . On the Task Inbox page, click the task to open it. On the task page, click the Logs tab. All events that take place during the task life cycle is listed in the Logs tab. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/interacting-with-processes-viewing-task-history-log-proc |
Installing on IBM Z and IBM LinuxONE | Installing on IBM Z and IBM LinuxONE OpenShift Container Platform 4.14 Installing OpenShift Container Platform on IBM Z and IBM LinuxONE Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_ibm_z_and_ibm_linuxone/index |
Chapter 7. Resource Limits and Policies | Chapter 7. Resource Limits and Policies You can define resource limits and policies to control important aspects of how the broker instance should handle messages. The process for configuring these resource limits and policies is different in AMQ Broker 7 than in AMQ 6, and many of the configuration properties have changed. 7.1. How Resource Limits and Policies Are Configured In AMQ 6, resource limits and policies were configured as destination policies in the broker's configuration file. In AMQ Broker 7, you define resource limits and policies for an address or set of addresses. When the broker instance receives a message, the resource limits and policies defined for the message's address are applied to the message. To configure resource limits and policies in AMQ Broker 7, you use the BROKER_INSTANCE_DIR /etc/broker.xml configuration file to define <address-setting> elements with the appropriate configuration properties. The broker.xml configuration file contains the following default address settings configuration: <address-settings> <!-- if you define auto-create on certain queues, management has to be auto-create --> <address-setting match="activemq.management#"> 1 <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> <!--default for catch all--> <address-setting match="#"> 2 <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> </address-settings> 1 The default management address setting. The nested resource limits and policies are applied to all messages with an address that matches activemq.management# . 2 The default address setting. The # wildcard matches all addresses, so the defined resource limits and policies are applied to all messages. To configure resource limits and policies, you specify an address or set of addresses (using <address-setting> ), and then add resource limit and policy properties to it. These properties are applied to each message sent to the address (or addresses) that you specified. Related Information For more information on using wildcards to match sets of addresses, see AMQ Broker wildcard syntax in Configuring AMQ Broker . 7.2. Resource Limit and Policy Configuration Properties Like AMQ 6, in AMQ Broker 7, you can add resource limits and policies to control how the broker handles certain aspects of how and when messages are delivered, the number of delivery attempts that should be made, and when messages should expire. 
However, the configuration properties you use to define these resource limits and policies are different in AMQ Broker 7. This section compares the <policyEntry> configuration properties in AMQ 6 to the equivalent <address-setting> properties in AMQ Broker 7. For complete details on each configuration property in AMQ Broker 7, see Address Setting Configuration Elements in Configuring AMQ Broker . 7.2.1. Queue Management Configuration Properties The following table compares the queue management configuration properties in AMQ 6 to the equivalent properties in AMQ Broker 7: To set... In AMQ 6 In AMQ Broker 7 The memory limit memoryLimit Sets a memory limit for the destination . The default is none . <max-size-bytes> Sets the memory limit for the address . The default is -1 (no limit). The order of the messages by priority within the queue prioritizedMessages This is off by default, which means that messages are prioritized on the consumer (not the broker), and therefore are ordered based on the priorities of the messages on the consumer. Messages are automatically ordered by priority within the queue. How often the broker should scan for expired messages expiredMessagesPeriod <message-expiry-scan-period> The default is 30000 ms. Whether the broker should delete destinations that are inactive for a period of time gcInactiveDestinations The default is false . No equivalent. However, for automatically-created queues, you can set the queue to be automatically deleted when the last consumer is detached. For more information, see Configuring automatic creation and deletion of addresses and queues in Configuring AMQ Broker . The inactive timeout inactiveTimeoutBeforeGC The default is 60 seconds. No equivalent. However, for automatically-created queues, you can set the queue to be automatically deleted when the last consumer is detached. For more information, see Creating and Deleting Queues and Addresses Automatically in Configuring AMQ Broker . Whether the broker should use a separate thread when dispatching from a queue optimizedDispatch The default is false . This cannot be set for an address or queue. However, you can control it from the incoming connection on which the message arrives. Use the directDeliver property on an acceptor or connector to control whether the message should be delivered on the same thread on which it arrived. For more information, see Acceptor and Connector Configuration Parameters in Configuring AMQ Broker . 7.2.2. Producer Policy Configuration Properties The following table compares the producer policy configuration properties in AMQ 6 to the equivalent properties in AMQ Broker 7: To set... In AMQ 6 In AMQ Broker 7 Producer flow control producerFlowControl Sets the broker to throttle the producer. The throttling is achieved by either withholding the producer's acknowledgement, or by raising a javax.jms.ResourceAllocationException exception and propagating it back to the client when local resources have been exhausted (such as memory or storage). The default is true . For the address, set <max-size-bytes> to the size at which the producer should be throttled, and then set <address-full-policy> to BLOCK . Configuring these two properties will also throttle your existing AMQ 6 OpenWire producers. The amount of credits a producer can request at one time No equivalent. <producer-window-size> Limiting the window size sets a limit on the number of bytes that the producer can have "in-flight" at any one time, which can prevent the remote connection from becoming overloaded. 
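To illustrate how several of the properties above combine in BROKER_INSTANCE_DIR/etc/broker.xml, the following sketch throttles producers by blocking them when an address reaches a size limit and also sets basic redelivery behavior; the address match and the values are examples only, not recommendations.
<address-settings>
   <!-- example match: applies to every address that begins with "orders." -->
   <address-setting match="orders.#">
      <!-- block producers once roughly 100 MB of messages are held for the address -->
      <max-size-bytes>104857600</max-size-bytes>
      <address-full-policy>BLOCK</address-full-policy>
      <!-- wait 5 seconds between redelivery attempts, up to 3 attempts -->
      <redelivery-delay>5000</redelivery-delay>
      <max-delivery-attempts>3</max-delivery-attempts>
      <dead-letter-address>DLQ</dead-letter-address>
   </address-setting>
</address-settings>
This mirrors the producer flow control guidance above: with BLOCK, producers are held back once the limit is reached instead of having messages paged or dropped.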
7.2.3. Consumer Policy Configuration Properties The following table compares the server-side destination policy configuration properties in AMQ 6 to the equivalent properties in AMQ Broker 7. These properties only apply to OpenWire clients: To set... In AMQ 6 In AMQ Broker 7 The queue prefetch queuePrefetch No equivalent on the broker. However, you can set the maximum size of messages (in bytes) that will be buffered on a consumer by setting the consumerWindowSize on the connection URL or directly on the ActiveMQConnectionFactory API. Whether to use the priority of a consumer when dispatching messages from a queue useConsumerPriority The default is true . This functionality does not exist in AMQ Broker 7. Whether to use the prefetch extension to enable the broker to dispatch "prefetched" messages when the message is delivered but not acknowledged usePrefetchExtension The default is true . This functionality does not exist in AMQ Broker 7. Initial redelivery delay initialRedeliveryDelay The default is 1000 ms. No equivalent. The broker instance automatically handles this. How long to wait before attempting to redeliver a canceled message redeliveryDelay The delivery delay if initialRedeliveryDelay is set to 0 . The default is 1000 ms. <redelivery-delay> The default is 0 ms. Exponential back-off useExponentialBackoff The default is false . No equivalent. You can use any of the other consumer policy configuration properties to configure redelivery for a consumer. Backoff multiplier backOffMultiplier The default is 5. <redelivery-multiplier> The multiplier to apply to the redelivery delay. The default is 1.0. The maximum number of times a cancelled message can be redelivered before it is returned to the broker's Dead Letter Queue maximumRedeliveries The default is 6. <max-delivery-attempts> The default is 10. The maximum value for the redelivery delay maximumRedeliveryDelay This is only applied if the useExponentialBackoff property is set. The default is -1 (no maximum redelivery delay). <max-redelivery-delay> The default is 0. The number of messages that a client can consume in a second No equivalent. No equivalent on the broker. However, you can set this on a consumer by setting the consumerMaxRate on the connection URL or directly on the ActiveMQConnectionFactory API. The consumerMaxRate property does not affect the number of messages that a client has in its buffer. Therefore, if the client has a slow rate limit and a high window size, the client's internal buffer would quickly fill up with messages. 7.2.4. Slow Consumer Handling Configuration Properties Like AMQ 6, AMQ Broker 7 can detect slow consumers and automatically stop the ones that are consistently slow. This was enabled by default in AMQ 6, but is disabled by default in AMQ Broker 7. The way in which the broker determines that a consumer is "slow" is also different. In AMQ Broker 7, a consumer is considered to be slow based on the number of messages the consumer has acknowledged. In AMQ 6, a consumer was considered to be slow based on the fullness of the prefetch buffer (if the buffer is consistently full, then the client may be consuming messages too slowly). The following table compares the slow consumer handling configuration properties in AMQ 6 to the equivalent properties in AMQ Broker 7: To set... In AMQ 6 In AMQ Broker 7 The number of times a consumer can be considered to be slow before it is aborted maxSlowCount The default is -1 (no limit). No equivalent. 
You can use the other slow consumer handling properties to control slow consumers. The amount of time a consumer can be continuously slow before it is aborted maxSlowDuration The default is 30000 ms. <slow-consumer-threshold> In AMQ Broker 7, this is the minimum rate of message consumption before a consumer is considered to be "slow" (measured in messages per second). The default is -1 (no threshold). The amount of time the broker should wait before performing another check for slow consumers checkPeriod The default is 30000 ms. <slow-consumer-check-period> In AMQ Broker 7, this is measured in seconds. The default is 5. Whether the broker should close the connection along with a slow consumer abortConnection The default is false . No equivalent. In AMQ Broker 7, when a slow consumer is aborted, the connection is also closed. The policy to apply if a slow consumer is detected. No equivalent. <slow-consumer-policy> The default is NOTIFY , which will send a CONSUMER_SLOW management notification to the application. You can also use the KILL policy to close the consumer's connection. However, this will impact any other client threads using that connection. Related Information For more information about how to handle slow consumers, see Handling Slow Consumers in Configuring AMQ Broker . 7.2.5. Message Paging Configuration Properties In AMQ Broker 7, the process by which the broker stores messages in memory and stores them to disk is significantly different than AMQ 6. Therefore, most of the paging configuration properties in AMQ 6 do not apply to AMQ Broker 7. In AMQ Broker 7, paging is configured on message addresses. Each address is configured to use a maximum number of bytes. When this limit is reached, messages sent to that address are paged to an on-disk buffer before they reach their queues. The queues are de-paged one page at a time when the address has enough available space. The following table compares the message paging size limits in AMQ 6 to the equivalent properties in AMQ Broker 7: To set... In AMQ 6 In AMQ Broker 7 The paging size maxPageSize This is measured in number of messages, and is variable based on the number of available messages. <page-size-bytes> This is measured in the physical page size in bytes (not messages). 7.2.6. Dead Letter Policy Configuration Properties AMQ Broker 7 handles undeliverable and expired messages much differently than AMQ 6. Dead letter policies are applied to addresses (instead of destinations), there are separate dead letter and expiry destinations (instead of a single dead letter queue), and the dead letter policy configuration is significantly different. Dead Letter Policies in AMQ 6 In AMQ 6, an expired or undeliverable message would be sent to the dead letter queue (DLQ) configured for each message's destination. To configure the DLQ for a destination, you could use any of the following dead letter policies: sharedDeadLetterStrategy The destination's undeliverable messages are sent to the shared, default DLQ called ActiveMQ.DLQ . individualDeadLetterStrategy The destination's undeliverable messages are sent to a dedicated DLQ for this destination. discardingDeadLetterStrategy The destination's undeliverable messages are discarded. Within a destination's dead letter policy, you could add the following configuration properties to control the types of messages that should be sent to the destination's DLQ: AMQ 6 Configuration Property Description processNotPersistent Whether non-persistent messages should be sent to the destination's DLQ. 
The default is false . processExpired Whether expired messages should be sent to the destination's DLQ. The default is true . expiration Whether an expiry should be applied to the messages sent to the destination's DLQ. The default is 0. Dead Letter Policies in AMQ 7 In AMQ Broker 7, undeliverable messages are sent to the applicable dead letter address , and expired messages are sent to the applicable expiry address . In the broker.xml configuration file, the default address setting specifies a dead letter address and expiry address. Undeliverable and expired messages will be delivered to the destinations specified by these settings: ... <address-settings> <address-setting match="#"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> ... </address-setting> ... </address-settings> ... By default, the dead letter and expiry addresses specify the DLQ and ExpiryQueue destinations, which are defined in the <addresses> section: ... <addresses> <address name="DLQ"> <anycast> <queue name="DLQ" /> </anycast> </address> <address name="ExpiryQueue"> <anycast> <queue name="ExpiryQueue" /> </anycast> </address> ... </addresses> ... To configure a non-default dead letter policy for an address, you can add a <dead-letter-address> and <expiry-address> to the address's <address-setting> and specify the DLQ and expiry queue it should use. Unlike AMQ 6, in AMQ Broker 7, you cannot set an expiry time on messages sent to the DLQ. In addition, both persistent and non-persistent messages are sent to the DLQ specified by the address's <dead-letter-address> . | [
"<address-settings> <!-- if you define auto-create on certain queues, management has to be auto-create --> <address-setting match=\"activemq.management#\"> 1 <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> <!--default for catch all--> <address-setting match=\"#\"> 2 <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> </address-settings>",
"<address-settings> <address-setting match=\"#\"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> </address-setting> </address-settings>",
"<addresses> <address name=\"DLQ\"> <anycast> <queue name=\"DLQ\" /> </anycast> </address> <address name=\"ExpiryQueue\"> <anycast> <queue name=\"ExpiryQueue\" /> </anycast> </address> </addresses>"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/migrating_to_red_hat_amq_7/resource_limits_and_policies |
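Pulling the redelivery, slow-consumer, and dead letter mappings from this chapter together, a combined <address-setting> might look like the following sketch. Only the element names come from the tables above; the address match and every numeric value are illustrative assumptions:

<address-setting match="example.#">
    <!-- Undeliverable and expired messages go to the queues defined in the <addresses> section. -->
    <dead-letter-address>DLQ</dead-letter-address>
    <expiry-address>ExpiryQueue</expiry-address>
    <!-- Redelivery: wait 5 seconds, back off by a factor of 1.5, cap the delay at 60 seconds, and give up after 6 attempts. -->
    <redelivery-delay>5000</redelivery-delay>
    <redelivery-multiplier>1.5</redelivery-multiplier>
    <max-redelivery-delay>60000</max-redelivery-delay>
    <max-delivery-attempts>6</max-delivery-attempts>
    <!-- Treat consumers that acknowledge fewer than 10 messages per second as slow; check every 30 seconds and send a CONSUMER_SLOW notification. -->
    <slow-consumer-threshold>10</slow-consumer-threshold>
    <slow-consumer-check-period>30</slow-consumer-check-period>
    <slow-consumer-policy>NOTIFY</slow-consumer-policy>
</address-setting>

A setting like this belongs inside the existing <address-settings> element of the BROKER_INSTANCE_DIR /etc/broker.xml file shown at the start of this chapter.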
Chapter 21. Installing an IdM replica | Chapter 21. Installing an IdM replica The following sections describe how to install an Identity Management (IdM) replica interactively, by using the command line (CLI). The replica installation process copies the configuration of the existing server and installs the replica based on that configuration. Note See Installing an Identity Management server using an Ansible playbook . Use Ansible roles to consistently install and customize multiple replicas. Interactive and non-interactive methods that do not use Ansible are useful in topologies where, for example, the replica preparation is delegated to a user or a third party. You can also use these methods in geographically distributed topologies where you do not have access from the Ansible controller node. Prerequisites You are installing one IdM replica at a time. The installation of multiple replicas at the same time is not supported. Ensure your system is prepared for IdM replica installation . Important If this preparation is not performed, installing an IdM replica will fail. 21.1. Installing an IdM replica with integrated DNS and a CA Follow this procedure to install an Identity Management (IdM) replica: With integrated DNS With a certificate authority (CA) You can do this to, for example, replicate the CA service for resiliency after installing an IdM server with an integrated CA. Important When configuring a replica with a CA, the CA configuration of the replica must mirror the CA configuration of the other server. For example, if the server includes an integrated IdM CA as the root CA, the new replica must also be installed with an integrated CA as the root CA. No other CA configuration is available in this case. Including the --setup-ca option in the ipa-replica-install command copies the CA configuration of the initial server. Prerequisites Ensure your system is prepared for an IdM replica installation . Procedure Enter ipa-replica-install with these options: --setup-dns to configure the replica as a DNS server --forwarder to specify a per-server forwarder, or --no-forwarder if you do not want to use any per-server forwarders. To specify multiple per-server forwarders for failover reasons, use --forwarder multiple times. Note The ipa-replica-install utility accepts a number of other options related to DNS settings, such as --no-reverse or --no-host-dns . For more information about them, see the ipa-replica-install (1) man page. --setup-ca to include a CA on the replica For example, to set up a replica with an integrated DNS server and a CA that forwards all DNS requests not managed by the IdM servers to the DNS server running on IP 192.0.2.1: After the installation completes, add a DNS delegation from the parent domain to the IdM DNS domain. For example, if the IdM DNS domain is idm.example.com , add a name server (NS) record to the example.com parent domain. Important Repeat this step each time after you install an IdM DNS server. 21.2. Installing an IdM replica with integrated DNS and no CA Follow this procedure to install an Identity Management (IdM) replica: With integrated DNS Without a certificate authority (CA) in an IdM environment in which a CA is already installed. The replica will forward all certificate operations to the IdM server with a CA installed. Prerequisites Ensure your system is prepared for an IdM replica installation . 
Procedure Enter ipa-replica-install with these options: --setup-dns to configure the replica as a DNS server --forwarder to specify a per-server forwarder, or --no-forwarder if you do not want to use any per-server forwarders. To specify multiple per-server forwarders for failover reasons, use --forwarder multiple times. For example, to set up a replica with an integrated DNS server that forwards all DNS requests not managed by the IdM servers to the DNS server running on IP 192.0.2.1: Note The ipa-replica-install utility accepts a number of other options related to DNS settings, such as --no-reverse or --no-host-dns . For more information about them, see the ipa-replica-install (1) man page. After the installation completes, add a DNS delegation from the parent domain to the IdM DNS domain. For example, if the IdM DNS domain is idm.example.com , add a name server (NS) record to the example.com parent domain. Important Repeat this step each time after you install an IdM DNS server. 21.3. Installing an IdM replica without integrated DNS and with a CA Follow this procedure to install an Identity Management (IdM) replica: Without integrated DNS With a certificate authority (CA) Important When configuring a replica with a CA, the CA configuration of the replica must mirror the CA configuration of the other server. For example, if the server includes an integrated IdM CA as the root CA, the new replica must also be installed with an integrated CA as the root CA. No other CA configuration is available in this case. Including the --setup-ca option in the ipa-replica-install command copies the CA configuration of the initial server. Prerequisites Ensure your system is prepared for an IdM replica installation . Procedure Enter ipa-replica-install with the --setup-ca option. Add the newly created IdM DNS service records to your DNS server: Export the IdM DNS service records into a file in the nsupdate format: Submit a DNS update request to your DNS server using the nsupdate utility and the dns_records_file.nsupdate file. For more information, see Updating External DNS Records Using nsupdate in RHEL 7 documentation. Alternatively, refer to your DNS server documentation for adding DNS records. 21.4. Installing an IdM replica without integrated DNS and without a CA Follow this procedure to install an Identity Management (IdM) replica: Without integrated DNS Without a certificate authority (CA) by providing the required certificates manually. The assumption here is that the first server was installed without a CA. Important You cannot install a server or replica using self-signed third-party server certificates because the imported certificate files must contain the full CA certificate chain of the CA that issued the LDAP and Apache server certificates. Prerequisites Ensure your system is prepared for an IdM replica installation . Procedure Enter ipa-replica-install , and provide the required certificate files by adding these options: --dirsrv-cert-file --dirsrv-pin --http-cert-file --http-pin For details about the files that are provided using these options, see Section 5.1, "Certificates required to install an IdM server without a CA" . For example: Note Do not add the --ca-cert-file option. The ipa-replica-install utility takes this part of the certificate information automatically from the first server you installed. 21.5. Installing an IdM hidden replica A hidden (unadvertised) replica is an Identity Management (IdM) server that has all services running and available. 
However, it has no SRV records in DNS, and LDAP server roles are not enabled. Therefore, clients cannot use service discovery to detect these hidden replicas. For further details about hidden replicas, see The hidden replica mode . Prerequisites Ensure your system is prepared for an IdM replica installation . Procedure To install a hidden replica, use the following command: Note that the command installs a replica without DNS SRV records and with disabled LDAP server roles. You can also change the mode of existing replica to hidden. For details, see Demotion and promotion of hidden replicas . 21.6. Testing an IdM replica After creating a replica, check if the replica replicates data as expected. You can use the following procedure. Procedure Create a user on the new replica: Make sure the user is visible on another replica: 21.7. Connections performed during an IdM replica installation Requests performed during an IdM replica installation lists the operations performed by ipa-replica-install , the Identity Management (IdM) replica installation tool. Table 21.1. Requests performed during an IdM replica installation Operation Protocol used Purpose DNS resolution against the DNS resolvers configured on the client system DNS To discover the IP addresses of IdM servers Requests to ports 88 (TCP/TCP6 and UDP/UDP6) on the discovered IdM servers Kerberos To obtain a Kerberos ticket JSON-RPC calls to the IdM Apache-based web-service on the discovered or configured IdM servers HTTPS IdM client enrollment; replica keys retrieval and certificate issuance if required Requests over TCP/TCP6 to port 389 on the IdM server, using SASL GSSAPI authentication, plain LDAP, or both LDAP IdM client enrollment; CA certificate chain retrieval; LDAP data replication Requests over TCP/TCP6 to port 22 on IdM server SSH To check if the connection is working (optionally) Access over port 8443 (TCP/TCP6) on the IdM servers HTTPS To administer the Certificate Authority on the IdM server (only during IdM server and replica installation) | [
"ipa-replica-install --setup-dns --forwarder 192.0.2.1 --setup-ca",
"ipa-replica-install --setup-dns --forwarder 192.0.2.1",
"ipa-replica-install --setup-ca",
"ipa dns-update-system-records --dry-run --out dns_records_file.nsupdate",
"ipa-replica-install --dirsrv-cert-file /tmp/server.crt --dirsrv-cert-file /tmp/server.key --dirsrv-pin secret --http-cert-file /tmp/server.crt --http-cert-file /tmp/server.key --http-pin secret",
"ipa-replica-install --hidden-replica",
"[admin@new_replica ~]USD ipa user-add test_user",
"[admin@another_replica ~]USD ipa user-show test_user"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/installing-an-ipa-replica_installing-identity-management |
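For the step in this chapter that exports the IdM DNS service records, the hand-off to nsupdate might look like the following sketch. The export command and file name match the earlier example, but the nsupdate authentication option is an assumption that depends on how your DNS server accepts dynamic updates:

# Export the IdM DNS service records in nsupdate format, as shown earlier in this chapter.
ipa dns-update-system-records --dry-run --out dns_records_file.nsupdate
# Submit the update; -g negotiates GSS-TSIG. Use -k <keyfile> instead if your server expects a TSIG key file.
nsupdate -g dns_records_file.nsupdate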
Chapter 8. Installing a private cluster on Azure | Chapter 8. Installing a private cluster on Azure In OpenShift Container Platform version 4.16, you can install a private cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 8.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 8.2.1. Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to the internet to access the Azure APIs. The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 8.2.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 8.2.2.
User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. Private cluster with network address translation You can use Azure VNET network address translation (NAT) to provide outbound internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions. When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with Azure Firewall You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation. When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with a proxy configuration You can use a proxy with user-defined routing to allow egress to the internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy. When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure's internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints. Private cluster with no internet access You can install a private network that restricts all access to the internet, except the Azure API. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following: An OpenShift image registry mirror that allows for pulling container images Access to Azure APIs With these requirements available, you can use user-defined routing to create private clusters with no public endpoints. 8.3. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.16, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. 
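In practice, you point the installation program at those existing resources from the install-config.yaml file. A minimal fragment might look like the following sketch; every name is a placeholder, and the fully annotated sample file later in this chapter shows the same fields in context:

platform:
  azure:
    # Resource group that contains the existing VNet (placeholder name).
    networkResourceGroupName: vnet_resource_group
    # Existing VNet and the subnets for the control plane and compute machines (placeholder names).
    virtualNetwork: vnet
    controlPlaneSubnet: control_plane_subnet
    computeSubnet: compute_subnet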
By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 8.3.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 8.3.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. 
You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 8.1. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x If you are using Azure Firewall to restrict the internet access, then you can configure Azure Firewall to allow the Azure APIs . A network security group rule is not needed. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Table 8.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If you configure an external NTP time server, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 8.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources About the OpenShift SDN network plugin Configuring your firewall 8.3.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. 
You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 8.3.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 8.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 8.7. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. 
Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for Azure 8.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.4. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 8.7.2. 
Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 8.1. Machine types based on 64-bit x86 architecture standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 8.7.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 8.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 8.7.4. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 
2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 8.7.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 8.7.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 publish: Internal 24 1 10 14 21 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. 
The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 13 Specify the name of the resource group that contains the DNS zone for your base domain. 15 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 16 If you use an existing VNet, specify the name of the resource group that contains it. 17 If you use an existing VNet, specify its name. 18 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 19 If you use an existing VNet, specify the name of the subnet to host the compute machines. 20 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 8.7.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 8.8. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. 
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 8.9. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 8.9.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files.
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 8.9.2. Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 8.9.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions: Example 8.3. 
Required Azure permissions Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.Resources/subscriptions/resourceGroups/delete Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/delete Microsoft.Authorization/roleAssignments/write Microsoft.Authorization/roleDefinitions/read Microsoft.Authorization/roleDefinitions/write Microsoft.Authorization/roleDefinitions/delete Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/containers/delete Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.ManagedIdentity/userAssignedIdentities/delete Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete Microsoft.Storage/register/action Microsoft.ManagedIdentity/register/action Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 8.9.2.2. Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. 
This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. 
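As an illustration only, a hypothetical invocation of the same command with the placeholders filled in might look like the following. Every value shown here (the name, output directory, region, IDs, and resource group name) is an example, not a value to copy:
USD ccoctl azure create-all \
  --name=mycluster \
  --output-dir=./ccoctl-output \
  --region=centralus \
  --subscription-id=00000000-0000-0000-0000-000000000000 \
  --credentials-requests-dir=./credreqs \
  --dnszone-resource-group-name=resource_group \
  --tenant-id=11111111-1111-1111-1111-111111111111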
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 8.9.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> 1 # ... 1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 8.10. Optional: Preparing a private Microsoft Azure cluster for a private image registry By installing a private image registry on a private Microsoft Azure cluster, you can create private storage endpoints. Private storage endpoints disable public facing endpoints to the registry's storage account, adding an extra layer of security to your OpenShift Container Platform deployment. Important Do not install a private image registry on Microsoft Azure Red Hat OpenShift (ARO), because the endpoint can put your Microsoft Azure Red Hat OpenShift cluster in an unrecoverable state. 
Use the following guide to prepare your private Microsoft Azure cluster for installation with a private image registry. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI (oc). You have prepared an install-config.yaml that includes the following information: The publish field is set to Internal . You have set the permissions for creating a private storage endpoint. For more information, see "Azure permissions for installer-provisioned infrastructure". Procedure If you have not previously created installation manifest files, do so by running the following command: USD ./openshift-install create manifests --dir <installation_directory> This command displays the following messages: Example output INFO Consuming Install Config from target directory INFO Manifests created in: <installation_directory>/manifests and <installation_directory>/openshift Create an image registry configuration object and pass in the networkResourceGroupName , subnetName , and vnetName provided by Microsoft Azure. For example: USD touch imageregistry-config.yaml apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: managementState: "Managed" replicas: 2 rolloutStrategy: RollingUpdate storage: azure: networkAccess: internal: networkResourceGroupName: <vnet_resource_group> 1 subnetName: <subnet_name> 2 vnetName: <vnet_name> 3 type: Internal 1 Optional. If you have an existing VNet and subnet setup, replace <vnet_resource_group> with the resource group name that contains the existing virtual network (VNet). 2 Optional. If you have an existing VNet and subnet setup, replace <subnet_name> with the name of the existing compute subnet within the specified resource group. 3 Optional. If you have an existing VNet and subnet setup, replace <vnet_name> with the name of the existing virtual network (VNet) in the specified resource group. Note The imageregistry-config.yaml file is consumed during the installation process. If you want to keep a copy, back it up before installation. Move the imageregistry-config.yaml file to the <installation_directory/manifests> folder by running the following command: USD mv imageregistry-config.yaml <installation_directory/manifests/> Next steps After you have moved the imageregistry-config.yaml file to the <installation_directory/manifests> folder and set the required permissions, proceed to "Deploying the cluster". Additional resources For the list of permissions needed to create a private storage endpoint, see Required Azure permissions for installer-provisioned infrastructure . 8.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID.
You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. If the file was not previously detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates.
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 8.14. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 publish: Internal 24",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"az login",
"ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7",
"ls <path_to_ccoctl_output_dir>/manifests",
"azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create manifests --dir <installation_directory>",
"INFO Consuming Install Config from target directory INFO Manifests created in: <installation_directory>/manifests and <installation_directory>/openshift",
"touch imageregistry-config.yaml",
"apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: managementState: \"Managed\" replicas: 2 rolloutStrategy: RollingUpdate storage: azure: networkAccess: internal: networkResourceGroupName: <vnet_resource_group> 1 subnetName: <subnet_name> 2 vnetName: <vnet_name> 3 type: Internal",
"mv imageregistry-config.yaml <installation_directory/manifests/>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_azure/installing-azure-private |
Installing on IBM Z and IBM LinuxONE | Installing on IBM Z and IBM LinuxONE OpenShift Container Platform 4.17 Installing OpenShift Container Platform on IBM Z and IBM LinuxONE Red Hat OpenShift Documentation Team | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.17.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 4 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 coreos.inst.secure_ipl \\ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 zfcp.allow_lun_scan=0",
"ipl c",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.17 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc debug node/<node_name> chroot /host",
"cat /sys/firmware/ipl/secure",
"1 1",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.17.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 4 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 coreos.inst.secure_ipl \\ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 zfcp.allow_lun_scan=0",
"ipl c",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.17 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc debug node/<node_name> chroot /host",
"cat /sys/firmware/ipl/secure",
"1 1",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"prot_virt: Reserving <amount>MB as ultravisor base storage.",
"cat /sys/firmware/uv/prot_virt_host",
"1",
"{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```",
"base64 <your-hostkey>.crt",
"gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign",
"[ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.",
"Starting coreos-ignition-s...reOS Ignition User Config Setup [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key",
"variant: openshift version: 4.17.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 zfcp.allow_lun_scan=0",
"qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}",
"virt-install --noautoconsole --connect qemu:///system --name <vm_name> --memory <memory_mb> --vcpus <vcpus> --disk <disk> --launchSecurity type=\"s390-pv\" \\ 1 --import --network network=<virt_network_parm>,mac=<mac_address> --disk path=<ign_file>,format=raw,readonly=on,serial=ignition,startup_policy=optional 2",
"virt-install --connect qemu:///system --name <vm_name> --memory <memory_mb> --vcpus <vcpus> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ / 1 --disk <vm_name>.qcow2,size=<image_size>,cache=none,io=native --network network=<virt_network_parm> --boot hd --extra-args \"rd.neednet=1\" --extra-args \"coreos.inst.install_dev=/dev/<block_device>\" --extra-args \"coreos.inst.ignition_url=http://<http_server>/bootstrap.ign\" \\ 2 --extra-args \"coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img\" \\ 3 --extra-args \"ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>\" --noautoconsole --wait",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.17 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"prot_virt: Reserving <amount>MB as ultravisor base storage.",
"cat /sys/firmware/uv/prot_virt_host",
"1",
"{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```",
"base64 <your-hostkey>.crt",
"gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign",
"[ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.",
"Starting coreos-ignition-s...reOS Ignition User Config Setup [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key",
"variant: openshift version: 4.17.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 zfcp.allow_lun_scan=0",
"qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}",
"virt-install --noautoconsole --connect qemu:///system --name <vm_name> --memory <memory_mb> --vcpus <vcpus> --disk <disk> --launchSecurity type=\"s390-pv\" \\ 1 --import --network network=<virt_network_parm>,mac=<mac_address> --disk path=<ign_file>,format=raw,readonly=on,serial=ignition,startup_policy=optional 2",
"virt-install --connect qemu:///system --name <vm_name> --memory <memory_mb> --vcpus <vcpus> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ / 1 --disk <vm_name>.qcow2,size=<image_size>,cache=none,io=native --network network=<virt_network_parm> --boot hd --extra-args \"rd.neednet=1\" --extra-args \"coreos.inst.install_dev=/dev/<block_device>\" --extra-args \"coreos.inst.ignition_url=http://<http_server>/bootstrap.ign\" \\ 2 --extra-args \"coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img\" \\ 3 --extra-args \"ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>\" --noautoconsole --wait",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.17 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.17.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 4 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 coreos.inst.secure_ipl \\ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 zfcp.allow_lun_scan=0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.17 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc debug node/<node_name> chroot /host",
"cat /sys/firmware/ipl/secure",
"1 1",
"lsreipl",
"Re-IPL type: fcp WWPN: 0x500507630400d1e3 LUN: 0x4001400e00000000 Device: 0.0.810e bootprog: 0 br_lba: 0 Loadparm: \"\" Bootparms: \"\" clear: 0",
"for DASD output: Re-IPL type: ccw Device: 0.0.525d Loadparm: \"\" clear: 0",
"sudo shutdown -h",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.17.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 4 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 coreos.inst.secure_ipl \\ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 zfcp.allow_lun_scan=0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.17 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc debug node/<node_name> chroot /host",
"cat /sys/firmware/ipl/secure",
"1 1",
"lsreipl",
"Re-IPL type: fcp WWPN: 0x500507630400d1e3 LUN: 0x4001400e00000000 Device: 0.0.810e bootprog: 0 br_lba: 0 Loadparm: \"\" Bootparms: \"\" clear: 0",
"for DASD output: Re-IPL type: ccw Device: 0.0.525d Loadparm: \"\" clear: 0",
"sudo shutdown -h",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker0 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker0]} nodeSelector: matchLabels: node-role.kubernetes.io/worker0: \"\"",
"ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.8000\", DRIVER==\"zfcp\", GOTO=\"cfg_zfcp_host_0.0.8000\" ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"zfcp\", TEST==\"[ccw/0.0.8000]\", GOTO=\"cfg_zfcp_host_0.0.8000\" GOTO=\"end_zfcp_host_0.0.8000\" LABEL=\"cfg_zfcp_host_0.0.8000\" ATTR{[ccw/0.0.8000]online}=\"1\" LABEL=\"end_zfcp_host_0.0.8000\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-host-0.0.8000.rules 3",
"ACTION==\"add\", SUBSYSTEMS==\"ccw\", KERNELS==\"0.0.8000\", GOTO=\"start_zfcp_lun_0.0.8207\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"start_zfcp_lun_0.0.8000\" SUBSYSTEM==\"fc_remote_ports\", ATTR{port_name}==\"0x500507680d760026\", GOTO=\"cfg_fc_0.0.8000_0x500507680d760026\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"cfg_fc_0.0.8000_0x500507680d760026\" ATTR{[ccw/0.0.8000]0x500507680d760026/unit_add}=\"0x00bc000000000000\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"end_zfcp_lun_0.0.8000\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-lun-0.0.8000:0x500507680d760026:0x00bc000000000000.rules 3",
"ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.4444\", DRIVER==\"dasd-eckd\", GOTO=\"cfg_dasd_eckd_0.0.4444\" ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"dasd-eckd\", TEST==\"[ccw/0.0.4444]\", GOTO=\"cfg_dasd_eckd_0.0.4444\" GOTO=\"end_dasd_eckd_0.0.4444\" LABEL=\"cfg_dasd_eckd_0.0.4444\" ATTR{[ccw/0.0.4444]online}=\"1\" LABEL=\"end_dasd_eckd_0.0.4444\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3",
"ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1000\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1001\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1002\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccwgroup\", KERNEL==\"0.0.1000\", DRIVER==\"qeth\", GOTO=\"cfg_qeth_0.0.1000\" GOTO=\"end_qeth_0.0.1000\" LABEL=\"group_qeth_0.0.1000\" TEST==\"[ccwgroup/0.0.1000]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1000]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1001]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1002]\", GOTO=\"end_qeth_0.0.1000\" ATTR{[drivers/ccwgroup:qeth]group}=\"0.0.1000,0.0.1001,0.0.1002\" GOTO=\"end_qeth_0.0.1000\" LABEL=\"cfg_qeth_0.0.1000\" ATTR{[ccwgroup/0.0.1000]online}=\"1\" LABEL=\"end_qeth_0.0.1000\"",
"base64 /path/to/file/",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3",
"ssh <user>@<node_ip_address>",
"oc debug node/<node_name>",
"sudo chzdev -e <device>",
"ssh <user>@<node_ip_address>",
"oc debug node/<node_name>",
"sudo /sbin/mpathconf --enable",
"sudo multipath",
"sudo fdisk /dev/mapper/mpatha",
"sudo multipath -ll",
"mpatha (20017380030290197) dm-1 IBM,2810XIV size=512G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw -+- policy='service-time 0' prio=50 status=enabled |- 1:0:0:6 sde 68:16 active ready running |- 1:0:1:6 sdf 69:24 active ready running |- 0:0:0:6 sdg 8:80 active ready running `- 0:0:1:6 sdh 66:48 active ready running"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/installing_on_ibm_z_and_ibm_linuxone/index |
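The MachineConfig examples in the preceding listing are applied with standard oc commands. A minimal sketch, assuming the YAML was saved locally as 99-worker0-devices.yaml and that a worker0 machine config pool exists as in the examples (the pool name comes from the listing, the file name is an assumption):

```bash
# Apply the device-enablement MachineConfig and watch the pool roll it out
oc apply -f 99-worker0-devices.yaml
oc get machineconfigpool worker0 -w   # wait until the pool reports UPDATED=True
```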
3.3. Setting User Permissions | 3.3. Setting User Permissions By default, the root user and any user who is a member of the group haclient have full read/write access to the cluster configuration. As of Red Hat Enterprise Linux 6.6, you can use the pcs acl command to set permissions for local users to allow read-only or read-write access to the cluster configuration by using access control lists (ACLs). Setting permissions for local users is a two-step process: Execute the pcs acl role create... command to create a role which defines the permissions for that role. Assign the role you created to a user with the pcs acl user create command. The following example procedure provides read-only access to the cluster configuration for a local user named rouser . This procedure requires that the user rouser exists on the local system and that the user rouser is a member of the group haclient . Enable Pacemaker ACLs with the enable-acl cluster property. Create a role named read-only with read-only permissions for the cib. Create the user rouser in the pcs ACL system and assign that user the read-only role. View the current ACLs. The following example procedure provides write access to the cluster configuration for a local user named wuser . This procedure requires that the user wuser exists on the local system and that the user wuser is a member of the group haclient . Enable Pacemaker ACLs with the enable-acl cluster property. Create a role named write-access with write permissions for the cib. Create the user wuser in the pcs ACL system and assign that user the write-access role. View the current ACLs. For further information about cluster ACLs, see the help screen for the pcs acl command. | [
"adduser rouser usermod -a -G haclient rouser",
"pcs property set enable-acl=true --force",
"pcs acl role create read-only description=\"Read access to cluster\" read xpath /cib",
"pcs acl user create rouser read-only",
"pcs acl User: rouser Roles: read-only Role: read-only Description: Read access to cluster Permission: read xpath /cib (read-only-read)",
"adduser wuser usermod -a -G haclient wuser",
"pcs property set enable-acl=true --force",
"pcs acl role create write-access description=\"Full access\" write xpath /cib",
"pcs acl user create wuser write-access",
"pcs acl User: rouser Roles: read-only User: wuser Roles: write-access Role: read-only Description: Read access to cluster Permission: read xpath /cib (read-only-read) Role: write-access Description: Full Access Permission: write xpath /cib (write-access-write)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-accesscontrol-haar |
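The same pcs acl building blocks can scope access more narrowly than the whole CIB. The following sketch grants write access to resource definitions only; the user name opuser and the //resources XPath target are illustrative assumptions, not part of the original procedure:

```bash
# Assumes opuser exists locally and is a member of the haclient group
pcs property set enable-acl=true --force
pcs acl role create resource-admin description="Write access to resources" write xpath //resources
pcs acl user create opuser resource-admin
pcs acl    # verify the role and the user assignment
```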
6.1 Release Notes | 6.1 Release Notes Red Hat Ceph Storage 6.1 Release notes for Red Hat Ceph Storage 6.1 Red Hat Ceph Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/6.1_release_notes/index |
Chapter 19. The volume_key function | Chapter 19. The volume_key function The volume_key function provides two tools, libvolume_key and volume_key . libvolume_key is a library for manipulating storage volume encryption keys and storing them separately from volumes. volume_key is an associated command line tool used to extract keys and passphrases in order to restore access to an encrypted hard drive. This is useful for when the primary user forgets their keys and passwords, after an employee leaves abruptly, or in order to extract data after a hardware or software failure corrupts the header of the encrypted volume. In a corporate setting, the IT help desk can use volume_key to back up the encryption keys before handing over the computer to the end user. Currently, volume_key only supports the LUKS volume encryption format. Note volume_key is not included in a standard install of Red Hat Enterprise Linux 6 server. For information on installing it, refer to http://fedoraproject.org/wiki/Disk_encryption_key_escrow_use_cases . 19.1. Commands The format for volume_key is: The operands and mode of operation for volume_key are determined by specifying one of the following options: --save This command expects the operand volume [ packet ]. If a packet is provided then volume_key will extract the keys and passphrases from it. If packet is not provided, then volume_key will extract the keys and passphrases from the volume , prompting the user where necessary. These keys and passphrases will then be stored in one or more output packets. --restore This command expects the operands volume packet . It then opens the volume and uses the keys and passphrases in the packet to make the volume accessible again, prompting the user where necessary, such as allowing the user to enter a new passphrase, for example. --setup-volume This command expects the operands volume packet name . It then opens the volume and uses the keys and passphrases in the packet to set up the volume for use of the decrypted data as name . Name is the name of a dm-crypt volume. This operation makes the decrypted volume available as /dev/mapper/ name . This operation does not permanently alter the volume by adding a new passphrase, for example. The user can access and modify the decrypted volume, modifying volume in the process. --reencrypt , --secrets , and --dump These three commands perform similar functions with varying output methods. They each require the operand packet , and each opens the packet , decrypting it where necessary. --reencrypt then stores the information in one or more new output packets. --secrets outputs the keys and passphrases contained in the packet . --dump outputs the content of the packet , though the keys and passphrases are not output by default. This can be changed by appending --with-secrets to the command. It is also possible to only dump the unencrypted parts of the packet, if any, by using the --unencrypted command. This does not require any passphrase or private key access. Each of these can be appended with the following options: -o , --output packet This command writes the default key or passphrase to the packet . The default key or passphrase depends on the volume format. Ensure it is one that is unlikely to expire, and will allow --restore to restore access to the volume. --output-format format This command uses the specified format for all output packets. 
Currently, format can be one of the following: asymmetric : uses CMS to encrypt the whole packet, and requires a certificate asymmetric_wrap_secret_only : wraps only the secret, or keys and passphrases, and requires a certificate passphrase : uses GPG to encrypt the whole packet, and requires a passphrase --create-random-passphrase packet This command generates a random alphanumeric passphrase, adds it to the volume (without affecting other passphrases), and then stores this random passphrase into the packet . | [
"volume_key [OPTION]... OPERAND"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/ch-volumekey |
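A minimal usage sketch built only from the operands described above; the device path /dev/sda2 and the packet file name are placeholders:

```bash
# Back up the encryption keys of a LUKS volume, then restore access later
volume_key --save /dev/sda2 -o escrow.packet       # prompts for an existing LUKS passphrase
volume_key --restore /dev/sda2 escrow.packet       # prompts to set a new passphrase on the volume
```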
Chapter 13. Using Kerberos (GSSAPI) authentication | Chapter 13. Using Kerberos (GSSAPI) authentication AMQ Streams supports the use of the Kerberos (GSSAPI) authentication protocol for secure single sign-on access to your Kafka cluster. GSSAPI is an API wrapper for Kerberos functionality, insulating applications from underlying implementation changes. Kerberos is a network authentication system that allows clients and servers to authenticate to each other by using symmetric encryption and a trusted third party, the Kerberos Key Distribution Centre (KDC). 13.1. Setting up AMQ Streams to use Kerberos (GSSAPI) authentication This procedure shows how to configure AMQ Streams so that Kafka clients can access Kafka and ZooKeeper using Kerberos (GSSAPI) authentication. The procedure assumes that a Kerberos krb5 resource server has been set up on a Red Hat Enterprise Linux host. The procedure shows, with examples, how to configure: Service principals Kafka brokers to use the Kerberos login ZooKeeper to use Kerberos login Producer and consumer clients to access Kafka using Kerberos authentication The instructions describe Kerberos set up for a single ZooKeeper and Kafka installation on a single host, with additional configuration for a producer and consumer client. Prerequisites To be able to configure Kafka and ZooKeeper to authenticate and authorize Kerberos credentials, you will need: Access to a Kerberos server A Kerberos client on each Kafka broker host For more information on the steps to set up a Kerberos server, and clients on broker hosts, see the example Kerberos on RHEL set up configuration . How you deploy Kerberos depends on your operating system. Red Hat recommends using Identity Management (IdM) when setting up Kerberos on Red Hat Enterprise Linux. Users of an Oracle or IBM JDK must install a Java Cryptography Extension (JCE). Oracle JCE IBM JCE Add service principals for authentication From your Kerberos server, create service principals (users) for ZooKeeper, Kafka brokers, and Kafka producer and consumer clients. Service principals must take the form SERVICE-NAME/FULLY-QUALIFIED-HOST-NAME@DOMAIN-REALM . Create the service principals, and keytabs that store the principal keys, through the Kerberos KDC. For example: zookeeper/[email protected] kafka/[email protected] producer1/[email protected] consumer1/[email protected] The ZooKeeper service principal must have the same hostname as the zookeeper.connect configuration in the Kafka config/server.properties file: zookeeper.connect= node1.example.redhat.com :2181 If the hostname is not the same, localhost is used and authentication will fail. Create a directory on the host and add the keytab files: For example: /opt/kafka/krb5/zookeeper-node1.keytab /opt/kafka/krb5/kafka-node1.keytab /opt/kafka/krb5/kafka-producer1.keytab /opt/kafka/krb5/kafka-consumer1.keytab Ensure the kafka user can access the directory: chown kafka:kafka -R /opt/kafka/krb5 Configure ZooKeeper to use a Kerberos Login Configure ZooKeeper to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for zookeeper . 
Create or modify the opt/kafka/config/jaas.conf file to support ZooKeeper client and server operations: Client { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true 1 storeKey=true 2 useTicketCache=false 3 keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" 4 principal="zookeeper/[email protected]"; 5 }; Server { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" principal="zookeeper/[email protected]"; }; QuorumServer { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" principal="zookeeper/[email protected]"; }; QuorumLearner { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/zookeeper-node1.keytab" principal="zookeeper/[email protected]"; }; 1 Set to true to get the principal key from the keytab. 2 Set to true to store the principal key. 3 Set to true to obtain the Ticket Granting Ticket (TGT) from the ticket cache. 4 The keyTab property points to the location of the keytab file copied from the Kerberos KDC. The location and file must be readable by the kafka user. 5 The principal property is configured to match the fully-qualified principal name created on the KDC host, which follows the format SERVICE-NAME/FULLY-QUALIFIED-HOST-NAME@DOMAIN-NAME . Edit opt/kafka/config/zookeeper.properties to use the updated JAAS configuration: # ... requireClientAuthScheme=sasl jaasLoginRenew=3600000 1 kerberos.removeHostFromPrincipal=false 2 kerberos.removeRealmFromPrincipal=false 3 quorum.auth.enableSasl=true 4 quorum.auth.learnerRequireSasl=true 5 quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner 6 quorum.auth.server.loginContext=QuorumServer quorum.auth.kerberos.servicePrincipal=zookeeper/_HOST 7 quorum.cnxn.threads.size=20 1 Controls the frequency for login renewal in milliseconds, which can be adjusted to suit ticket renewal intervals. Default is one hour. 2 Dictates whether the hostname is used as part of the login principal name. If using a single keytab for all nodes in the cluster, this is set to true . However, it is recommended to generate a separate keytab and fully-qualified principal for each broker host for troubleshooting. 3 Controls whether the realm name is stripped from the principal name for Kerberos negotiations. It is recommended that this setting is set as false . 4 Enables SASL authentication mechanisms for the ZooKeeper server and client. 5 The RequireSasl properties controls whether SASL authentication is required for quorum events, such as master elections. 6 The loginContext properties identify the name of the login context in the JAAS configuration used for authentication configuration of the specified component. The loginContext names correspond to the names of the relevant sections in the opt/kafka/config/jaas.conf file. 7 Controls the naming convention to be used to form the principal name used for identification. The placeholder _HOST is automatically resolved to the hostnames defined by the server.1 properties at runtime. 
Start ZooKeeper with JVM parameters to specify the Kerberos login configuration: su - kafka export EXTRA_ARGS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf"; /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties If you are not using the default service name ( zookeeper ), add the name using the -Dzookeeper.sasl.client.username= NAME parameter. Note If you are using the /etc/krb5.conf location, you do not need to specify -Djava.security.krb5.conf=/etc/krb5.conf when starting ZooKeeper, Kafka, or the Kafka producer and consumer. Configure the Kafka broker server to use a Kerberos login Configure Kafka to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for kafka . Modify the opt/kafka/config/jaas.conf file with the following elements: KafkaServer { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/opt/kafka/krb5/kafka-node1.keytab" principal="kafka/[email protected]"; }; KafkaClient { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab="/opt/kafka/krb5/kafka-node1.keytab" principal="kafka/[email protected]"; }; Configure each broker in the Kafka cluster by modifying the listener configuration in the config/server.properties file so the listeners use the SASL/GSSAPI login. Add the SASL protocol to the map of security protocols for the listener, and remove any unwanted protocols. For example: # ... broker.id=0 # ... listeners=SECURE://:9092,REPLICATION://:9094 1 inter.broker.listener.name=REPLICATION # ... listener.security.protocol.map=SECURE:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT 2 # .. sasl.enabled.mechanisms=GSSAPI 3 sasl.mechanism.inter.broker.protocol=GSSAPI 4 sasl.kerberos.service.name=kafka 5 ... 1 Two listeners are configured: a secure listener for general-purpose communications with clients (supporting TLS for communications), and a replication listener for inter-broker communications. 2 For TLS-enabled listeners, the protocol name is SASL_PLAINTEXT. For non-TLS-enabled connectors, the protocol name is SASL_PLAINTEXT. If SSL is not required, you can remove the ssl.* properties. 3 SASL mechanism for Kerberos authentication is GSSAPI . 4 Kerberos authentication for inter-broker communication. 5 The name of the service used for authentication requests is specified to distinguish it from other services that may also be using the same Kerberos configuration. Start the Kafka broker, with JVM parameters to specify the Kerberos login configuration: su - kafka export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties If the broker and ZooKeeper cluster were previously configured and working with a non-Kerberos-based authentication system, it is possible to start the ZooKeeper and broker cluster and check for configuration errors in the logs. After starting the broker and Zookeeper instances, the cluster is now configured for Kerberos authentication. Configure Kafka producer and consumer clients to use Kerberos authentication Configure Kafka producer and consumer clients to use the Kerberos Key Distribution Center (KDC) for authentication using the user principals and keytabs previously created for producer1 and consumer1 . 
Add the Kerberos configuration to the producer or consumer configuration file. For example: /opt/kafka/config/producer.properties # ... sasl.mechanism=GSSAPI 1 security.protocol=SASL_PLAINTEXT 2 sasl.kerberos.service.name=kafka 3 sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \ 4 useKeyTab=true \ useTicketCache=false \ storeKey=true \ keyTab="/opt/kafka/krb5/producer1.keytab" \ principal="producer1/[email protected]"; # ... 1 Configuration for Kerberos (GSSAPI) authentication. 2 Kerberos uses the SASL plaintext (username/password) security protocol. 3 The service principal (user) for Kafka that was configured in the Kerberos KDC. 4 Configuration for the JAAS using the same properties defined in jaas.conf . /opt/kafka/config/consumer.properties # ... sasl.mechanism=GSSAPI security.protocol=SASL_PLAINTEXT sasl.kerberos.service.name=kafka sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \ useKeyTab=true \ useTicketCache=false \ storeKey=true \ keyTab="/opt/kafka/krb5/consumer1.keytab" \ principal="consumer1/[email protected]"; # ... Run the clients to verify that you can send and receive messages from the Kafka brokers. Producer client: export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"; /opt/kafka/bin/kafka-console-producer.sh --producer.config /opt/kafka/config/producer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094 Consumer client: export KAFKA_HEAP_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true"; /opt/kafka/bin/kafka-console-consumer.sh --consumer.config /opt/kafka/config/consumer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094 Additional resources Kerberos man pages: krb5.conf(5), kinit(1), klist(1), and kdestroy(1) Example Kerberos server on RHEL set up configuration Example client application to authenticate with a Kafka cluster using Kerberos tickets | [
"zookeeper.connect= node1.example.redhat.com :2181",
"/opt/kafka/krb5/zookeeper-node1.keytab /opt/kafka/krb5/kafka-node1.keytab /opt/kafka/krb5/kafka-producer1.keytab /opt/kafka/krb5/kafka-consumer1.keytab",
"chown kafka:kafka -R /opt/kafka/krb5",
"Client { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true 1 storeKey=true 2 useTicketCache=false 3 keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" 4 principal=\"zookeeper/[email protected]\"; 5 }; Server { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" principal=\"zookeeper/[email protected]\"; }; QuorumServer { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" principal=\"zookeeper/[email protected]\"; }; QuorumLearner { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/zookeeper-node1.keytab\" principal=\"zookeeper/[email protected]\"; };",
"requireClientAuthScheme=sasl jaasLoginRenew=3600000 1 kerberos.removeHostFromPrincipal=false 2 kerberos.removeRealmFromPrincipal=false 3 quorum.auth.enableSasl=true 4 quorum.auth.learnerRequireSasl=true 5 quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner 6 quorum.auth.server.loginContext=QuorumServer quorum.auth.kerberos.servicePrincipal=zookeeper/_HOST 7 quorum.cnxn.threads.size=20",
"su - kafka export EXTRA_ARGS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties",
"KafkaServer { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab=\"/opt/kafka/krb5/kafka-node1.keytab\" principal=\"kafka/[email protected]\"; }; KafkaClient { com.sun.security.auth.module.Krb5LoginModule required debug=true useKeyTab=true storeKey=true useTicketCache=false keyTab=\"/opt/kafka/krb5/kafka-node1.keytab\" principal=\"kafka/[email protected]\"; };",
"broker.id=0 listeners=SECURE://:9092,REPLICATION://:9094 1 inter.broker.listener.name=REPLICATION listener.security.protocol.map=SECURE:SASL_PLAINTEXT,REPLICATION:SASL_PLAINTEXT 2 .. sasl.enabled.mechanisms=GSSAPI 3 sasl.mechanism.inter.broker.protocol=GSSAPI 4 sasl.kerberos.service.name=kafka 5",
"su - kafka export KAFKA_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties",
"sasl.mechanism=GSSAPI 1 security.protocol=SASL_PLAINTEXT 2 sasl.kerberos.service.name=kafka 3 sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \\ 4 useKeyTab=true useTicketCache=false storeKey=true keyTab=\"/opt/kafka/krb5/producer1.keytab\" principal=\"producer1/[email protected]\";",
"sasl.mechanism=GSSAPI security.protocol=SASL_PLAINTEXT sasl.kerberos.service.name=kafka sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true useTicketCache=false storeKey=true keyTab=\"/opt/kafka/krb5/consumer1.keytab\" principal=\"consumer1/[email protected]\";",
"export KAFKA_HEAP_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true\"; /opt/kafka/bin/kafka-console-producer.sh --producer.config /opt/kafka/config/producer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094",
"export KAFKA_HEAP_OPTS=\"-Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true\"; /opt/kafka/bin/kafka-console-consumer.sh --consumer.config /opt/kafka/config/consumer.properties --topic topic1 --bootstrap-server node1.example.redhat.com:9094"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_rhel/assembly-kerberos_str |
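Before starting the clients, it can be useful to confirm that a keytab and principal are accepted by the KDC. This optional sanity check is not part of the original procedure; the keytab path and principal follow the examples above:

```bash
# Obtain and inspect a ticket for the producer principal, then discard it
kinit -kt /opt/kafka/krb5/kafka-producer1.keytab producer1/[email protected]
klist       # should list a valid TGT issued by EXAMPLE.REDHAT.COM
kdestroy    # remove the test ticket; the Kafka client obtains its own through JAAS
```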
Chapter 9. Additional resources | Chapter 9. Additional resources Installation Installing and configuring Red Hat Decision Manager on Red Hat JBoss EAP 7.4 Installing and configuring Red Hat Decision Manager in a Red Hat JBoss EAP clustered environment Installing and configuring Red Hat Decision Manager on Red Hat JBoss Web Server Installing and configuring KIE Server on IBM WebSphere Application Server Installing and configuring KIE Server on Oracle WebLogic Server Integration Creating Red Hat Decision Manager business applications with Spring Boot Integrating Red Hat Fuse with Red Hat Decision Manager Integrating Red Hat Decision Manager with Red Hat Single Sign-On Red Hat build of OptaPlanner Developing Solvers with Red Hat Decision Manager OpenShift Deploying an Red Hat Decision Manager environment on Red Hat OpenShift Container Platform 4 using Operators Deploying an Red Hat Decision Manager environment on Red Hat OpenShift Container Platform 3 using templates | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/additional_resources |
probe::tcp.disconnect.return | probe::tcp.disconnect.return Name probe::tcp.disconnect.return - TCP socket disconnection complete Synopsis tcp.disconnect.return Values name Name of this probe ret Error code (0: no error) Context The process which disconnects tcp | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tcp-disconnect-return |
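A short sketch of how the probe's values might be used from the command line; the script body is illustrative:

```bash
# Print the error code each time a TCP disconnect completes
stap -e 'probe tcp.disconnect.return { printf("%s ret=%d\n", name, ret) }'
```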
4.204. openldap | 4.204. openldap 4.204.1. RHBA-2011:1514 - openldap bug fix and enhancement update Updated openldap packages that fix number of bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. OpenLDAP is an open source suite of LDAP (Lightweight Directory Access Protocol) applications and development tools. LDAP is a set of protocols for accessing directory services (usually phone book style information, but other information is possible) over the Internet, similar to the way DNS (Domain Name System) information is propagated over the Internet. The openldap package contains configuration files, libraries, and documentation for OpenLDAP. Bug Fixes BZ# 717738 In a utility which uses both OpenLDAP and Mozilla NSS (Network Security Services) libraries, OpenLDAP validates TLS peer and the certificate is cached by Mozilla NSS library. The utility then sometimes terminated unexpectedly on the NSS_Shutdown() function call because the client certificate was not freed and the cache could not be destroyed. With this update, the peer certificate is freed in OpenLDAP library after certificate validation is finished, all cache entries can now be deleted properly, and the NSS_Shutdown() call now succeeds as expected. BZ# 726984 When a program used the OpenLDAP library to securely connect to an LDAP server using SSL/TLS, while the server was using a certificate with a wildcarded common name (for example CN=*.example.com ), the connection to the server failed. With this update, the library has been fixed to verify wildcard hostnames used in certificates correctly, and the connection to the server now succeeds if the wildcard common name matches the server name. BZ# 727533 Previously, if an OpenLDAP server was installed with an SQL back end, the server terminated unexpectedly after a few operations. An upstream patch, which updates data types for storing the length of the values by using the ODBC (Open Database Connectivity) interface, has been provided to address this issue. Now, the server no longer crashes when the SQL back end is used. BZ# 684810 The slapd-config(5) and ldap.conf(5) manual pages contained incorrect information about TLS settings. This update adds new TLS documentation relevant for the Mozilla NSS cryptographic library. BZ# 698921 When an LDIF (LDAP Data Interchange Format) input file was passed to the ldapadd utility or another openldap client tool, and the file was not terminated by a newline character, the client terminated unexpectedly. With this update, client utilities are able to properly handle such LDIF files, and the crashes no longer occur in the described scenario. BZ# 701227 When an LDIF (LDAP Data Interchange Format) input file was passed to the ldapadd utility or another openldap client tool, and a line in the file was split into two lines but was missing correct indentation (the second line has to be indented by one space character), the client terminated unexpectedly. With this update, client utilities are able to properly handle such filetype LDIF files, and the crashes no longer occur in the described scenario. BZ# 709407 When an OpenLDAP server was under heavy load or multiple replicating OpenLDAP servers were running, and, at the same time, TLS/SSL mode with certificates in PEM (Privacy Enhanced Mail) format was enabled, a race condition caused the server to terminate unexpectedly after a random amount of time (ranging from minutes to weeks). 
With this update, a mutex has been added to the code to protect calls of thread-unsafe Mozilla NSS functions dealing with PEM certificates, and the crashes no longer occur in the described scenario. BZ# 712358 When the openldap-servers package was installed on a machine while the initscript package was not already installed, some scriptlets terminated during installation and error messages were returned. With this update, initscripts have been defined as a required package for openldap-servers , and no error messages are now returned in the described scenario. BZ# 713525 When an openldap client had the TLS_REQCERT option set to never and the TLS_CACERTDIR option set to an empty directory, TLS connection attempts to a remote server failed as TLS could not be initialized on the client side. Now, TLS_CACERTDIR errors are ignored when TLS_REQCERT is set to never , thus fixing this bug. BZ# 722923 When a slapd.conf file was converted into a new slapd.d directory while the constraint overlay was in place, the constraint_attribute option of the size or count type was converted to the olcConstraintAttribute option with its value part missing. A patch has been provided to address this issue and constraint_attribute options are now converted correctly in the described scenario. BZ# 722959 When an openldap client had the TLS_REQCERT option set to never and the remote LDAP server uses a certificate issued by a CA (Certificate Authority) whose certificate has expired, connection attempts to the server failed due to the expired certificate. Now, expired CA certificates are ignored when TLS_REQCERT is set to never , thus fixing this bug. BZ# 723487 Previously, the openldap package compilation log file contained warning messages returned by strict-aliasing rules. These warnings indicated that unexpected runtime behavior could occur. With this update, the -fno-strict-aliasing option is passed to the compiler to avoid optimizations that can produce invalid code, and no warning messages are now returned during the package compilation. BZ# 723514 Previously, the olcDDStolerance option was shortening TTL (time to live) for dynamic entries, instead of prolonging it. Consequently, when an OpenLDAP server was configured with the dds overlay and the olcDDStolerance option was enabled, the dynamic entries were deleted before their TTL expired. A patch has been provided to address this issue and the real lifetime of a dynamic entry is now calculated properly, as described in documentation. BZ# 729087 When a utility used the OpenLDAP library and TLS to connect to a server, while the library failed to verify a certificate or a key, a memory leak occurred in the tlsm_find_and_verify_cert_key() function. Now, verified certificates and keys are properly disposed of when their verification fails, and memory leaks no longer occur in the described scenario. BZ# 729095 When the olcVerifyClient option was set to allow in an OpenLDAP server or the TLS_REQCERT option was set to allow in a client utility, while the remote peer certificate was invalid, OpenLDAP server/client connection failed. With this update, invalid remote peer certificates are ignored, and connections can now be established in the described scenario. BZ# 731168 When multiple TLS operations were performed by clients or other replicated servers, with the openldap-servers package installed and TLS enabled, the server terminated unexpectedly. 
With this update, a mutex has been added to the code to protect calls of thread-unsafe Mozilla NSS initialization functions, and the crashes no longer occur in the described scenario. BZ# 732001 When the openldap-servers package was being installed on a server for the first time, redundant and confusing / character was printed during the installation. With this update, the responsible RPM scriptlet has been fixed and the / character is no longer printed in the described scenario. BZ# 723521 Previously, the slapo-unique manual page was missing information about quoting the keywords and URIs (uniform resource identifiers), and the attribute parameter was not described in the section about unique_strict configuration options. A patch has been provided to address these issues and the manual page is now up-to-date. BZ# 742592 Previously, when the openldap-servers package was installed, host-based ACLs did not work. With this update, configuration flags that enable TCP wrappers have been updated, and the host-based ACLs now work as expected. Enhancements BZ# 730311 Previously, when a connection to an LDAP server was created by specifying search root DN (distinguished name) instead of the server hostname, the SRV records in DNS were requested and a list of LDAP server hostnames was generated. The servers were then queried in the order, in which the DNS server returned them but the priority and weight of the records were ignored. This update adds support for priority/weight of the DNS SRV records, and the servers are now queried according to their priority/weight, as required by RFC 2782. BZ# 712494 In the default installation of the openldap-servers package, the configuration database ( cn=config ) could only be modified manually when the slapd daemon was not running. With this update, the ldapi:/// interface has been enabled by default, and the ACLs (access control lists) now enable the root user to modify the server configuration without stopping the server and using OpenLDAP client tools if he is authenticated using ldapi:/// and the SASL/EXTERNAL mechanism. BZ# 723999 The openldap package was compiled without RELRO (read-only relocations) flags and was therefore vulnerable to various attacks based on overwriting the ELF section of a program. To increase the security of the package, the openldap spec file has been modified to use the -Wl,-z,relro flags when compiling the package. The openldap package is now provided with partial RELRO protection. Users of openldap are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/openldap |
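The ldapi:/// and SASL/EXTERNAL enhancement described in BZ#712494 can be exercised with a standard OpenLDAP client. A sketch, run as root on the server host (the -LLL and dn arguments only trim the output and are assumptions):

```bash
# Read the cn=config database without stopping slapd; use ldapmodify the same way to change it
ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config -LLL dn
```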
Chapter 1. General Introduction to Virtualization | Chapter 1. General Introduction to Virtualization 1.1. What is Virtualization? Virtualization is a broad computing term used for running software, usually multiple operating systems, concurrently and in isolation from other programs on a single system. Virtualization is accomplished by using a hypervisor . This is a software layer or subsystem that controls hardware and enables running multiple operating systems, called virtual machines (VMs) or guests , on a single (usually physical) machine. This machine with its operating system is called a host . For more information, see the Red Hat Customer Portal . There are several virtualization methods: Full virtualization Full virtualization uses an unmodified version of the guest operating system. The guest addresses the host's CPU via a channel created by the hypervisor. Because the guest communicates directly with the CPU, this is the fastest virtualization method. Paravirtualization Paravirtualization uses a modified guest operating system. The guest communicates with the hypervisor. The hypervisor passes the unmodified calls from the guest to the CPU and other interfaces, both real and virtual. Because the calls are routed through the hypervisor, this method is slower than full virtualization. Software virtualization (or emulation) Software virtualization uses binary translation and other emulation techniques to run unmodified operating systems. The hypervisor translates the guest calls to a format that can be used by the host system. Because all calls are translated, this method is slower than virtualization. Note that Red Hat does not support software virtualization on Red Hat Enterprise Linux. Containerization While KVM virtualization creates a separate instance of OS kernel, operating-system-level virtualization, also known as containerization , operates on top of an existing OS kernel and creates isolated instances of the host OS, known as containers . For more information on containers, see the Red Hat Customer Portal . Containers do not have the versatility of KVM virtualization, but are more lightweight and flexible to handle. For a more detailed comparison, see the Introduction to Linux Containers . To use containers on Red Hat Enterprise Linux, install the docker packages from the Extras channel . Note that Red Hat also offers optimized solutions for using containers, such as Red Hat Enterprise Linux Atomic Host and Red Hat OpenShift Container Platform . For details on container support, see the Red Hat KnowledgeBase . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_getting_started_guide/chap-virtualization_getting_started-what_is_it |
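On a registered Red Hat Enterprise Linux 7 system, enabling the Extras channel and installing the docker packages mentioned above might look as follows; the repository ID is the commonly used name for the Extras channel and is given here as an assumption:

```bash
subscription-manager repos --enable=rhel-7-server-extras-rpms
yum install docker
```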
Chapter 7. Managing content views | Chapter 7. Managing content views Red Hat Satellite uses content views to allow your hosts access to a deliberately curated subset of content. To do this, you must define which repositories to use and then apply certain filters to the content. The general workflow for creating content views for filtering and creating snapshots is as follows: Create a content view. Add one or more repositories that you want to the content view. Optional: Create one or more filters to refine the content of the content view. For more information, see Section 7.14, "Content filter examples" . Optional: Resolve any package dependencies for a content view. For more information, see Section 7.12, "Resolving package dependencies" . Publish the content view. Optional: Promote the content view to another environment. For more information, see Section 7.8, "Promoting a content view" . Attach the content host to the content view. If a repository is not associated with the content view, the file /etc/yum.repos.d/redhat.repo remains empty and systems registered to it cannot receive updates. Hosts can only be associated with a single content view. To associate a host with multiple content views, create a composite content view. For more information, see Section 7.10, "Creating a composite content view" . 7.1. Content views in Red Hat Satellite A content view is a deliberately curated subset of content that your hosts can access. By creating a content view, you can define the software versions used by a particular environment or Capsule Server. Each content view creates a set of repositories across each environment. Your Satellite Server stores and manages these repositories. For example, you can create content views in the following ways: A content view with older package versions for a production environment and another content view with newer package versions for a Development environment. A content view with a package repository required by an operating system and another content view with a package repository required by an application. A composite content view for a modular approach to managing content views. For example, you can use one content view for content for managing an operating system and another content view for content for managing an application. By creating a composite content view that combines both content views, you create a new repository that merges the repositories from each of the content views. However, the repositories for the content views still exist and you can keep managing them separately as well. Default Organization View A Default Organization View is an application-controlled content view for all content that is synchronized to Satellite. You can register a host to the Library environment on Satellite to consume the Default Organization View without configuring content views and lifecycle environments. Promoting a content view across environments When you promote a content view from one environment to the environment in the application lifecycle, Satellite updates the repository and publishes the packages. Example 7.1. 
Promoting a package from Development to Testing The repositories for Testing and Production contain the my-software -1.0-0.noarch.rpm package: Development Testing Production Version of the content view Version 2 Version 1 Version 1 Contents of the content view my-software -1.1-0.noarch.rpm my-software -1.0-0.noarch.rpm my-software -1.0-0.noarch.rpm If you promote Version 2 of the content view from Development to Testing , the repository for Testing updates to contain the my-software -1.1-0.noarch.rpm package: Development Testing Production Version of the content view Version 2 Version 2 Version 1 Contents of the content view my-software -1.1-0.noarch.rpm my-software -1.1-0.noarch.rpm my-software -1.0-0.noarch.rpm This ensures hosts are designated to a specific environment but receive updates when that environment uses a new version of the content view. 7.2. Best practices for content views Content views that bundle content, such as Red Hat Enterprise Linux and additional software like Apache-2.4 or PostgreSQL-16.2 , are easier to maintain. Content views that are too small require more maintenance. If you require daily updated content, use the content view Default Organization View , which contains the latest synchronized content from all repositories and is available in the Library lifecycle environment. Restrict composite content views to situations that require greater flexibility, for example, if you update one content view on a weekly basis and another content view on a monthly basis. If you use composite content views, first publish the content views and then publish the composite content views. The more content views you bundle into composite content views, the more effort is needed to change or update content. Setting a lifecycle environment for content views is unnecessary if they are solely bundled to a composite content view. Automate creating and publishing composite content views and lifecycle environments by using a Hammer script or an Ansible Playbook . Use cron jobs, systemd timers, or recurring logics for more visibility. Add the changes and date to the description of each published content view or composite content view version. The most recent activity, such as moving content to a new lifecycle environment, is displayed by date in the Satellite web UI, regardless of the latest changes to the content itself. Publishing a new content view or composite content view creates a new major version. Incremental errata updates increment the minor version. Note that you cannot change or reset this counter. 7.3. Best practices for patching content hosts Registering hosts to Satellite requires Red Hat Satellite Client 6, which contains the subscription-manager package, katello-host-tools package, and their dependencies. For more information, see Registering hosts in Managing hosts . Use the Satellite web UI to install, upgrade, and remove packages from hosts. You can update content hosts with job templates using SSH and Ansible. Apply errata on content hosts using the Satellite web UI. When patching packages on hosts using the default package manager, Satellite receives a list of packages and repositories to recalculate applicable errata and available updates. Modify or replace job templates to add custom steps. This allows you to run commands or execute scripts on hosts. When running bulk actions on hosts, bundle them by major operating system version, especially when upgrading packages. 
Select via remote execution - customize first to define the time when patches are applied to hosts when performing bulk actions. You cannot apply errata to packages that are not part of the repositories on Satellite and the attached content view. Modifications to installed packages using rpm or dpkg are sent to Satellite with the run of apt , yum , or zypper . 7.4. Creating a content view Use this procedure to create a simple content view. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites While you can stipulate whether you want to resolve any package dependencies on a content view by content view basis, you might want to change the default Satellite settings to enable or disable package resolution for all content views. For more information, see Section 7.12, "Resolving package dependencies" . Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Click Create content view . In the Name field, enter a name for the view. Satellite automatically completes the Label field from the name you enter. In the Description field, enter a description of the view. In the Type field, select a Content view or a Composite content view . Optional: If you want to solve dependencies automatically every time you publish this content view, select the Solve dependencies checkbox. Dependency solving slows the publishing time and might ignore any content view filters you use. This can also cause errors when resolving dependencies for errata. Click Create content view . Content view steps Click Create content view to create the content view. In the Repositories tab, select the repository from the Type list that you want to add to your content view, select the checkbox to the available repositories you want to add, then click Add repositories . Click Publish new version and in the Description field, enter information about the version to log changes. Optional: You can enable a promotion path by clicking Promote to Select a lifecycle environment from the available promotion paths to promote new version . Click . On the Review page, you can review the environments you are trying to publish. Click Finish . You can view the content view on the Content Views page. To view more information about the content view, click the content view name. To register a host to your content view, see Registering Hosts in Managing hosts . CLI procedure Obtain a list of repository IDs: Create the content view and add repositories: For the --repository-ids option, you can find the IDs in the output of the hammer repository list command. Publish the view: Optional: To add a repository to an existing content view, enter the following command: Satellite Server creates the new version of the view and publishes it to the Library environment. 7.5. Copying a content view You can copy a content view in the Satellite web UI or you can use the Hammer CLI to copy an existing content view into a new content view. To use the CLI instead of the Satellite web UI, see the CLI procedure . Note A copied content view does not have the same history as the original content view. Version 1 of the copied content view begins at the last version of the original content view. As a result, you cannot promote an older version of a content view from the copied content view. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Select the content view you want to copy. Click the vertical ellipsis icon and click Copy . 
In the Name field, enter a name for the new content view and click Copy content view . Verification The copied content view appears on the Content views page. CLI procedure Copy the content view by using Hammer: Verification The Hammer command reports: 7.6. Synchronizing a content view to a Capsule In the Satellite web UI, you can only synchronize all selected lifecycle environments simultaneously. If you need to synchronize smaller items, such as individual lifecycle environments, single content views, and single repositories, use the Hammer CLI. CLI procedure Synchronize a content view to your Capsule by using Hammer: Additional resources For more information about the command, run hammer capsule content synchronize --help . 7.7. Viewing module streams In Satellite, you can view the module streams of the repositories in your content views. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to a published version of a Content View > Module Streams to view the module streams that are available for the Content Types. Use the Search field to search for specific modules. To view the information about the module, click the module and its corresponding tabs to include Details , Repositories , Profiles , and Artifacts . CLI procedure List all organizations: View all module streams for your organization: 7.8. Promoting a content view Use this procedure to promote content views across different lifecycle environments. To use the CLI instead of the Satellite web UI, see the CLI procedure . Permission requirements for content view promotion Non-administrator users require two permissions to promote a content view to an environment: promote_or_remove_content_views promote_or_remove_content_views_to_environment . The promote_or_remove_content_views permission restricts which content views a user can promote. The promote_or_remove_content_views_to_environment permission restricts the environments to which a user can promote content views. With these permissions you can assign users permissions to promote certain content views to certain environments, but not to other environments. For example, you can limit a user so that they are permitted to promote to test environments, but not to production environments. You must assign both permissions to a user to allow them to promote content views. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Select the content view that you want to promote. Select the version that you want to promote, click the vertical ellipsis icon, and click Promote . Select the environment where you want to promote the content view and click Promote . Now the repository for the content view appears in all environments. CLI procedure Promote the content view using Hammer for each lifecycle environment: Now the database content is available in all environments. 
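As a sketch of the Hammer promotion step referenced above, promoting a single content view version into one lifecycle environment might look like the following; the organization, content view, version, and environment names are placeholders:

```bash
hammer content-view version promote \
  --organization "My_Organization" \
  --content-view "Database" \
  --version 1 \
  --to-lifecycle-environment "Development"
```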
Alternatively, you can promote content views across all lifecycle environments within an organization using the following Bash script: ORG=" My_Organization " CVV_ID= My_Content_View_Version_ID for i in USD(hammer --no-headers --csv lifecycle-environment list --organization USDORG | awk -F, {'print USD1'} | sort -n) do hammer content-view version promote --organization USDORG --to-lifecycle-environment-id USDi --id USDCVV_ID done Verification Display information about your content view version to verify that it is promoted to the required lifecycle environments: steps To register a host to your content view, see Registering Hosts in Managing hosts . 7.9. Composite content views overview A composite content view combines the content from several content views. For example, you might have separate content views to manage an operating system and an application individually. You can use a composite content view to merge the contents of both content views into a new repository. The repositories for the original content views still exist but a new repository also exists for the combined content. If you want to develop an application that supports different database servers. The example_application appears as: example_software Application Database Operating System Example of four separate content views: Red Hat Enterprise Linux (Operating System) PostgreSQL (Database) MariaDB (Database) example_software (Application) From the content views, you can create two composite content views. Example composite content view for a PostgreSQL database: Composite content view 1 - example_software on PostgreSQL example_software (Application) PostgreSQL (Database) Red Hat Enterprise Linux (Operating System) Example composite content view for a MariaDB: Composite content view 2 - example_software on MariaDB example_software (Application) MariaDB (Database) Red Hat Enterprise Linux (Operating System) Each content view is then managed and published separately. When you create a version of the application, you publish a new version of the composite content views. You can also select the Auto Publish option when creating a composite content view, and then the composite content view is automatically republished when a content view it includes is republished. Repository restrictions Docker repositories cannot be included more than once in a composite content view. For example, if you attempt to include two content views that contain the same docker repository in a composite content view, Satellite Server reports an error. 7.10. Creating a composite content view Use this procedure to create a composite content view. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Click Create content view . In the Create content view window, enter a name for the view in the Name field. Red Hat Satellite automatically completes the Label field from the name you enter. Optional: In the Description field, enter a description of the view. On the Type tab, select Composite content view . Optional: If you want to automatically publish a new version of the composite content view when a content view is republished, select the Auto publish checkbox. Click Create content view . On the Content views tab, select the content views that you want to add to the composite content view, and then click Add content views . In the Add content views window, select the version of each content view. 
Optional: If you want to automatically update the content view to the latest version, select the Always update to latest version checkbox. Click Add , then click Publish new version . Optional: In the Description field, enter a description of the content view. In the Publish window, set the Promote switch, then select the lifecycle environment. Click through the remaining steps, then click Finish . CLI procedure Before you create the composite content views, list the version IDs for your existing content views: Create a new composite content view. When the --auto-publish option is set to yes , the composite content view is automatically republished when a content view it includes is republished: Add a content view to the composite content view. You can identify the content view, content view version, and organization in the commands by either their ID or their name. To add multiple content views to the composite content view, repeat this step for every content view you want to include. If you have the Always update to latest version option enabled for the content view: If you have the Always update to latest version option disabled for the content view: Publish the composite content view: Promote the composite content view across all environments: 7.11. Content filter overview Content views also use filters to include or restrict certain Yum content. Without these filters, a content view includes everything from the selected repositories. There are two types of content filters: Table 7.1. Filter types Filter Type Description Include You start with no content, then select which content to add from the selected repositories. Use this filter to combine multiple content items. Exclude You start with all content from selected repositories, then select which content to remove. Use this filter when you want to use most of a particular content repository while excluding certain packages. The filter uses all content in the repository except for the content you select. Include and Exclude filter combinations If using a combination of Include and Exclude filters, publishing a content view triggers the include filters first, then the exclude filters. In this situation, select which content to include, then which content to exclude from the inclusive subset. Content types You can filter content based on the following content types: Table 7.2. Content types Content Type Description RPM Filter packages based on their name and version number. The RPM option filters non-modular RPM packages and errata. Source RPMs are not affected by this filter and will still be available in the content view. Package Group Filter packages based on package groups. The list of package groups is based on the repositories added to the content view. Erratum (by ID) Select which specific errata to add to the filter. The list of errata is based on the repositories added to the content view. Erratum (by Date and Type) Select an issued or updated date range and errata type (Bugfix, Enhancement, or Security) to add to the filter. Module Streams Select whether to include or exclude specific module streams. The Module Streams option filters modular RPMs and errata, but does not filter non-modular content that is associated with the selected module stream. Container Image Tag Select whether to include or exclude specific container image tags. 7.12. Resolving package dependencies Satellite can add dependencies of packages in a content view to the dependent repository when publishing the content view. To configure this, you can enable dependency solving .
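You enable dependency solving per content view in the Satellite web UI, as described in Section 7.13. As a CLI sketch, assuming the --solve-dependencies option is available in your version of Hammer, the same setting can be toggled with a command like the following; the content view and organization names are placeholders:
hammer content-view update --name " My_Content_View " --solve-dependencies yes --organization " My_Organization "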
For example, dependency solving is useful when you incrementally add a single package to a content view version. You might need to enable dependency solving to install that package. However, dependency solving is unnecessary in most situations. For example: When incrementally adding security errata to a content view, dependency solving can cause significant delays to content view publication without major benefits. Packages from a newer erratum might have dependencies that are incompatible with packages from an older content view version. Incrementally adding the erratum by solving dependencies might result in the inclusion of unwanted packages. As an alternative, consider updating the content view. Note Dependency solving only considers packages within the repositories of the content view. It does not consider packages installed on clients. For example, if a content view includes only AppStream, dependency solving does not include dependent BaseOS content at publish time. For more information, see Limitations to Repository Dependency Resolution in Managing content . Dependency solving can lead to the following problems: Significant delay in content view publication Satellite examines every repository in a content view for dependencies. Therefore, publish time increases with more repositories. To mitigate this problem, use multiple content views with fewer repositories and combine them into composite content views. Ignored content view filters on dependent packages Satellite prioritizes resolving package dependencies over the rules in your filter. For example, if you create a filter for security purposes but enable dependency solving, Satellite can add packages that you might consider insecure. To mitigate this problem, carefully test filtering rules to determine the required dependencies. If dependency solving includes unwanted packages, manually identify the core basic dependencies that the extra packages and errata need. Example 7.2. Combining exclusion filters with dependency solving For example, you can recreate Red Hat Enterprise Linux 8.3 by using content view filters and including selected errata from a later Red Hat Enterprise Linux 8 minor release. To achieve this, you create filters to exclude most of the errata after the Red Hat Enterprise Linux 8.3 release date, except a few that you need. Then, you enable dependency solving. In this situation, dependency solving might include more packages than expected. As a result, the host diverges from being a Red Hat Enterprise Linux 8.3 machine. If you do not need the extra errata and packages, do not configure content view filtering. Instead, enable and use the Red Hat Enterprise Linux 8.3 repository on the Content > Red Hat Repositories page in the Satellite web UI. Example 7.3. Excluding packages sometimes makes dependency solving impossible for DNF If you make a Red Hat Enterprise Linux 8.3 repository with a few excluded packages, dnf upgrade can sometimes fail. Do not enable dependency solving to resolve the problem. Instead, investigate the error from dnf and adjust the filters to stop excluding the missing dependency. Otherwise, dependency solving might cause the repository to diverge from Red Hat Enterprise Linux 8.3. 7.13. Enabling dependency solving for a content view Use this procedure to enable dependency solving for a content view. Prerequisites Dependency solving is useful only in limited contexts.
Before enabling it, ensure you read and understand Section 7.12, "Resolving package dependencies" . Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . From the list of content views, select the required content view. On the Details tab, toggle Solve dependencies . 7.14. Content filter examples Use any of the following examples with the procedure that follows to build custom content filters. Note Filters can significantly increase the time to publish a content view. For example, if a content view publish task completes in a few minutes without filters, it can take 30 minutes after adding an exclude or include errata filter. Example 1 Create a repository with the base Red Hat Enterprise Linux packages. This filter requires a Red Hat Enterprise Linux repository added to the content view. Filter: Inclusion Type: Include Content Type: Package Group Filter: Select only the Base package group Example 2 Create a repository that excludes all errata, except for security updates, after a certain date. This is useful if you want to perform system updates on a regular basis with the exception of critical security updates, which must be applied immediately. This filter requires a Red Hat Enterprise Linux repository added to the content view. Filter: Inclusion Type: Exclude Content Type: Erratum (by Date and Type) Filter: Select only the Bugfix and Enhancement errata types, and clear the Security errata type. Set the Date Type to Updated On . Set the Start Date to the date you want to restrict errata. Leave the End Date blank to ensure any new non-security errata are filtered. Example 3 This example combines Example 1 and Example 2: you require only the operating system packages and want to exclude recent bug fix and enhancement errata. This requires two filters attached to the same content view. The content view processes the Include filter first, then the Exclude filter. Filter 1: Inclusion Type: Include Content Type: Package Group Filter: Select only the Base package group Filter 2: Inclusion Type: Exclude Content Type: Erratum (by Date and Type) Filter: Select only the Bugfix and Enhancement errata types, and clear the Security errata type. Set the Date Type to Updated On . Set the Start Date to the date you want to restrict errata. Leave the End Date blank to ensure any new non-security errata are filtered. Example 4 Filter a specific module stream in a content view. Filter 1: Inclusion Type: Include Content Type: Module Stream Filter: Select only the specific module stream that you want for the content view, for example ant , and click Add Module Stream . Filter 2: Inclusion Type: Exclude Content Type: Package Filter: Add a rule to filter any non-modular packages that you want to exclude from the content view. If you do not filter the packages, the content view filter includes all non-modular packages associated with the module stream ant . Add a rule to exclude all * packages, or specify the package names that you want to exclude. For another example of how content filters work, see the following article: "How do content filters work in Satellite 6" . 7.15. Creating a content filter for Yum content You can filter content views containing Yum content to include or exclude specific packages, package groups, errata, or module streams. Filters are based on a combination of the name , version , and architecture . To use the CLI instead of the Satellite web UI, see the CLI procedure . For examples of how to build a filter, see Section 7.14, "Content filter examples" .
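For reference, an include filter can also be built from the CLI, complementing the exclude filter shown in the CLI procedure below. The following two commands are a sketch only; the filter name, package name, content view, and organization are placeholder values:
hammer content-view filter create --name "Include Base Packages" --type rpm --inclusion true --content-view " Example_Content_View " --organization " My_Organization "
hammer content-view filter rule create --content-view " Example_Content_View " --content-view-filter "Include Base Packages" --name "bash" --organization " My_Organization "
Publish the content view afterwards, as in the procedures that follow, for the filter to take effect.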
Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Select a content view. On the Filters tab, click Create filter . Enter a name. From the Content type list, select a content type. From the Inclusion Type list, select either Include filter or Exclude filter . Optional: In the Description field, enter a description for the filter. Click Create filter to create your content filter. Depending on what you enter for Content Type , add rules to create the filter that you want. Select whether you want the filter to Apply to subset of repositories or Apply to all repositories . Click Publish New Version to publish the filtered repository. Optional: In the Description field, enter a description of the changes. Complete the publish wizard to publish a new version of the content view. You can promote this content view across all environments. CLI procedure Add a filter to the content view. Use the --inclusion false option to set the filter to an Exclude filter: Add a rule to the filter: Publish the content view: Promote the view across all environments: 7.16. Deleting multiple content view versions You can delete multiple content view versions simultaneously. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Select the content view you want to delete versions of. On the Versions tab, select the checkbox of the version or versions you want to delete. Click the vertical ellipsis icon at the top of the list of content views. Click Delete to open the deletion wizard that shows any affected environments. If there are no affected environments, review the details and click Delete . If there are any affected environments, reassign any hosts or activation keys before deletion. Review the details of the actions. Click Delete . 7.17. Clearing the search filter If you search for specific content types by using keywords in the Search text box and the search returns no results, click Clear search to clear all the search queries and reset the Search text box. If you use a filter to search for specific repositories in the Type text box and the search returns no results, click Clear filters to clear all active filters and reset the Type text box. 7.18. Standardizing content view empty states If there are no filters listed for a content view, click Create filter . A modal opens to show you the steps to create a filter. Follow these steps to add a new filter. 7.19. Comparing content view versions Use this procedure to compare content view versions in Satellite. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Select a content view whose versions you want to compare. On the Versions tab, select the checkboxes next to any two versions you want to compare. Click Compare . The Compare screen has the pre-selected versions in the version dropdown menus and tabs for all content types found in either version. You can filter the results to show only the same, different, or all content types. You can compare different content view versions by selecting them from the dropdown menus. 7.20. Distributing archived content view versions The setting Distribute archived content view versions enables hosting of non-promoted content view version repositories in the Satellite content web application along with other repositories. This is useful while debugging to see what content is present in your content view versions. Procedure In the Satellite web UI, navigate to Administer > Settings . Click the Content tab.
Set the Distribute archived content view versions parameter to Yes . Click Submit . This enables the repositories of content view versions without lifecycle environments to be distributed at satellite.example.com/pulp/content/ My_Organization /content_views/ My_Content_View / My_Content_View_Version / . Note Older non-promoted content view versions are not distributed once the setting is enabled. Only new content view versions are distributed. | [
"hammer repository list --organization \" My_Organization \"",
"hammer content-view create --description \" My_Content_View \" --name \" My_Content_View \" --organization \" My_Organization \" --repository-ids 1,2",
"hammer content-view publish --description \" My_Content_View \" --name \" My_Content_View \" --organization \" My_Organization \"",
"hammer content-view add-repository --name \" My_Content_View \" --organization \" My_Organization \" --repository-id repository_ID",
"hammer content-view copy --name My_original_CV_name --new-name My_new_CV_name",
"hammer content-view copy --id=5 --new-name=\"mixed_copy\" Content view copied.",
"hammer capsule content synchronize --content-view \"my content view name\"",
"hammer organization list",
"hammer module-stream list --organization-id My_Organization_ID",
"hammer content-view version promote --content-view \"Database\" --version 1 --to-lifecycle-environment \"Development\" --organization \" My_Organization \" hammer content-view version promote --content-view \"Database\" --version 1 --to-lifecycle-environment \"Testing\" --organization \" My_Organization \" hammer content-view version promote --content-view \"Database\" --version 1 --to-lifecycle-environment \"Production\" --organization \" My_Organization \"",
"ORG=\" My_Organization \" CVV_ID= My_Content_View_Version_ID for i in USD(hammer --no-headers --csv lifecycle-environment list --organization USDORG | awk -F, {'print USD1'} | sort -n) do hammer content-view version promote --organization USDORG --to-lifecycle-environment-id USDi --id USDCVV_ID done",
"hammer content-view version info --id My_Content_View_Version_ID",
"hammer content-view version list --organization \" My_Organization \"",
"hammer content-view create --composite --auto-publish yes --name \" Example_Composite_Content_View \" --description \"Example composite content view\" --organization \" My_Organization \"",
"hammer content-view component add --component-content-view-id Content_View_ID --composite-content-view \" Example_Composite_Content_View \" --latest --organization \" My_Organization \"",
"hammer content-view component add --component-content-view-id Content_View_ID --composite-content-view \" Example_Composite_Content_View \" --component-content-view-version-id Content_View_Version_ID --organization \" My_Organization \"",
"hammer content-view publish --name \" Example_Composite_Content_View \" --description \"Initial version of composite content view\" --organization \" My_Organization \"",
"hammer content-view version promote --content-view \" Example_Composite_Content_View \" --version 1 --to-lifecycle-environment \"Development\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Composite_Content_View \" --version 1 --to-lifecycle-environment \"Testing\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Composite_Content_View \" --version 1 --to-lifecycle-environment \"Production\" --organization \" My_Organization \"",
"hammer content-view filter create --name \" Errata Filter \" --type erratum --content-view \" Example_Content_View \" --description \" My latest filter \" --inclusion false --organization \" My_Organization \"",
"hammer content-view filter rule create --content-view \" Example_Content_View \" --content-view-filter \" Errata Filter \" --start-date \" YYYY-MM-DD \" --types enhancement,bugfix --date-type updated --organization \" My_Organization \"",
"hammer content-view publish --name \" Example_Content_View \" --description \"Adding errata filter\" --organization \" My_Organization \"",
"hammer content-view version promote --content-view \" Example_Content_View \" --version 1 --to-lifecycle-environment \"Development\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Content_View \" --version 1 --to-lifecycle-environment \"Testing\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Content_View \" --version 1 --to-lifecycle-environment \"Production\" --organization \" My_Organization \""
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_content/managing_content_views_content-management |
Chapter 8. Machine Config Daemon metrics overview | Chapter 8. Machine Config Daemon metrics overview The Machine Config Daemon is a part of the Machine Config Operator. It runs on every node in the cluster. The Machine Config Daemon manages configuration changes and updates on each of the nodes. 8.1. Understanding Machine Config Daemon metrics Beginning with OpenShift Container Platform 4.3, the Machine Config Daemon provides a set of metrics. These metrics can be accessed using the Prometheus Cluster Monitoring stack. The following table describes this set of metrics. Some entries contain commands for getting specific logs. However, the most comprehensive set of logs is available using the oc adm must-gather command. Note Metrics marked with * in the Name and Description columns represent serious errors that might cause performance problems. Such problems might prevent updates and upgrades from proceeding. Table 8.1. MCO metrics Name Format Description Notes mcd_host_os_and_version []string{"os", "version"} Shows the OS that MCD is running on, such as RHCOS or RHEL. In case of RHCOS, the version is provided. mcd_drain_err* Logs errors received during failed drain. * While drains might need multiple tries to succeed, terminal failed drains prevent updates from proceeding. The drain_time metric, which shows how much time the drain took, might help with troubleshooting. For further investigation, see the logs by running: $ oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_pivot_err* []string{"err", "node", "pivot_target"} Logs errors encountered during pivot. * Pivot errors might prevent OS upgrades from proceeding. For further investigation, run this command to see the logs from the machine-config-daemon container: $ oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_state []string{"state", "reason"} State of Machine Config Daemon for the indicated node. Possible states are "Done", "Working", and "Degraded". In case of "Degraded", the reason is included. For further investigation, see the logs by running: $ oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_kubelet_state* Logs kubelet health failures. * This is expected to be empty, with failure count of 0. If the failure count exceeds 2, the error indicates that the threshold is exceeded. This indicates a possible issue with the health of the kubelet. For further investigation, run this command to access the node and see all its logs: $ oc debug node/<node> -- chroot /host journalctl -u kubelet mcd_reboot_err* []string{"message", "err", "node"} Logs the failed reboots and the corresponding errors. * This is expected to be empty, which indicates a successful reboot. For further investigation, see the logs by running: $ oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_update_state []string{"config", "err"} Logs success or failure of configuration updates and the corresponding errors. The expected value is rendered-master/rendered-worker-XXXX . If the update fails, an error is present.
For further investigation, see the logs by running: $ oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon Additional resources About OpenShift Container Platform monitoring Gathering data about your cluster | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_configuration/machine-config-daemon-metrics |